2018 Vol. 40, No. 1
2018, 40(1): 1-10.
doi: 10.11999/JEIT170317
Abstract:
Attribute-Based Encryption (ABE) is widely used in cloud storage, as it can achieve fine-grained access control. However, the original ABE schemes suffer from key escrow and attribute revocation problems. To solve these problems, this paper proposes a ciphertext-policy ABE scheme. In the scheme, the key escrow problem is solved by an escrow-free key issuing protocol, constructed with a secure two-party computation between the attribute authority and the central controller. By updating the attribute version key, the scheme achieves attribute-level user revocation, and through the central controller it achieves system-level user revocation. To reduce the user's computational burden of decryption, the scheme outsources the complicated pairing operations to the cloud service provider. Under the q-parallel BDHE assumption, the scheme is proved secure against chosen plaintext attacks in the random oracle model. Finally, the efficiency and functionality of the scheme are analyzed theoretically and experimentally. The experimental results show that the proposed scheme is free of the key escrow problem and achieves higher system efficiency.
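The attribute-level revocation idea above can be sketched in a few lines. This is an illustrative toy, not the paper's pairing-based construction: it uses plain modular exponentiation in Z_p*, and all parameter values are made up. The point is that a ciphertext component tied to an attribute version key v can be re-keyed to a new version v' by exponentiating with v'·v⁻¹ mod (p−1), so revoked users holding old version keys are cut off without re-encrypting from scratch.

```python
# Illustrative sketch only (the actual scheme uses bilinear pairings):
# attribute-level revocation by updating the attribute "version key".
p = 467                      # small prime; work in the group Z_p^*
g = 2                        # generator (illustrative)
k = 123                      # encryption randomness
v_old, v_new = 3, 5          # old / new attribute version keys, coprime to p - 1

C_old = pow(g, k * v_old, p)                         # component under v_old
update = (v_new * pow(v_old, -1, p - 1)) % (p - 1)   # re-key exponent v'/v
C_new = pow(C_old, update, p)                        # cloud applies the update

# The re-keyed component matches a fresh encryption under the new version key.
assert C_new == pow(g, k * v_new, p)
```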
2018, 40(1): 11-17.
doi: 10.11999/JEIT170340
Abstract:
To realize secure authentication of information transmitted between vehicle nodes in vehicular Ad hoc networks, a certificateless aggregate signature scheme is designed. The proposed scheme uses certificateless cryptography, which eliminates the complex cost of certificate maintenance and solves the key escrow problem. By communicating through pseudonyms generated by nearby roadside units, conditional privacy protection is achieved for vehicle users. In the random oracle model, the scheme is proved to be existentially unforgeable against adaptive chosen message attacks. The efficiency of the scheme is then analyzed, and the relationship between traffic density and message verification delay in the Vehicular Ad hoc NETwork (VANET) environment is simulated. The results show that the scheme satisfies message authentication, anonymity, unforgeability and traceability, while offering higher communication efficiency and shorter message verification delay, making it well suited to dynamic vehicular Ad hoc network environments.
2018, 40(1): 18-24.
doi: 10.11999/JEIT170175
Abstract:
Considering the problem of data interaction in a multi-level information environment, a multi-level interaction memory controller is designed and implemented. On the basis of the interaction model design, the overall structure of the controller is constructed. The key modules of the memory system and the interaction control logic are designed in detail, and a prototype system is used to complete multi-level information interaction in terms of user strategy. The experimental results show that the multi-level interaction memory controller can be configured according to users' actual needs and realizes the multi-level information interaction function, which is significant for hierarchical information management.
2018, 40(1): 25-34.
doi: 10.11999/JEIT170353
Abstract:
Auction-based resource allocation is a major challenge in cloud computing. However, existing research is mostly premised on untruthful bidding, a single resource type, and a single request per user. In this paper, a truthful auction mechanism is designed for Virtual Resource Allocation and Payment (VRAP) in cloud computing. In this mechanism, users can submit multiple requests at one time, but only one request can be satisfied, known as the multi-requirement single-minded model. It is proved that resource providers obtain more social welfare under this mechanism than before, and that users' bids are guaranteed to be truthful. The mechanism remains compatible with the traditional auction in which each user can submit only one request. For the resource allocation problem, a heuristic algorithm is proposed to obtain the allocation result in a short time; through a reallocation strategy, the social welfare of the cloud resource provider can be maximized. The payment algorithm takes the critical value into account to ensure that the mechanism is truthful. In the experiments, the mechanism is analyzed in terms of social welfare, execution time, resource utilization and so on. Experimental results show that the proposed scheme works well for virtual resource auctions.
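The critical-value payment rule mentioned above is what makes truthful bidding a dominant strategy. A minimal sketch, deliberately simplified from the paper's full VRAP mechanism: k identical resource units, one unit per bidder; the top-k bidders win, and each pays the (k+1)-th highest bid, which is exactly the critical value below which that bidder would have lost.

```python
# Simplified critical-value pricing (not the full VRAP mechanism): winners
# pay the threshold bid they had to beat, not their own bid, so overbidding
# or underbidding cannot improve a bidder's utility.
def allocate_and_price(bids, k):
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = order[:k]
    # Critical value: the highest losing bid; a winner bidding below it loses.
    critical = bids[order[k]] if len(bids) > k else 0.0
    return winners, {i: critical for i in winners}

winners, payments = allocate_and_price([4.0, 9.0, 2.0, 7.0, 5.0], k=2)
# Bidders 1 (bid 9.0) and 3 (bid 7.0) win; each pays the third-highest bid, 5.0.
```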
2018, 40(1): 35-41.
doi: 10.11999/JEIT170261
Abstract:
Pairing-friendly elliptic curves play a vital role in pairing-based cryptography. The construction of such curves not only influences implementation efficiency but also concerns the security of the system. Although many methods for constructing such curves have been introduced, most of them rely on exhaustive search. In this paper, a new systematic method is proposed that converts the construction problem into solving equation systems instead of exhaustive searching. The utility of the method is demonstrated by surveying elliptic curves with embedding degrees 5, 8, 10 and 12; all kinds of families, including complete families, complete families with variable discriminant and sparse families, can be explained via the proposed method. In particular, a new family of elliptic curves is found.
2018, 40(1): 42-49.
doi: 10.11999/JEIT170421
Abstract:
Low-complexity, long-period pseudo-random sequences are widely used in data encryption and communication systems. A method of generating pseudo-random sequences based on the Residue Number System (RNS) and permutation polynomials over finite fields is proposed. This method extends several short-period sequences into a long-period digital pseudo-random sequence based on the Chinese Remainder Theorem (CRT). The short-period sequences are generated in parallel by the corresponding permutation polynomials over small finite fields, thereby reducing the bit width in hardware implementation and increasing the generation speed. To generate long-period sequences, a method to find the permutation polynomials and an optimization procedure for the CRT are also proposed. On most current hardware platforms, the proposed method can easily generate pseudo-random sequences with period over 2^100. Meanwhile, the method offers a large space of selectable polynomials; for example, 10905 permutation polynomials can be used when q ≡ 2 (mod 3) and q ≤ 503. On a Xilinx XC7Z020, it costs only twenty 18 kbit BRAMs and a small amount of other resources (no multipliers) to generate a pseudo-random sequence with period over 2^90, at a generation rate over 449.236 Mbps. The results of the NIST test show that the sequence has good randomness and encryption performance.
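The CRT extension step can be illustrated directly. In this toy sketch (component sequences are trivial stand-ins, not the paper's permutation-polynomial generators), two short sequences over coprime moduli 3 and 5 are combined with the Chinese Remainder Theorem, and the combined sequence's period becomes the product 15:

```python
# CRT combination of short-period component sequences into one long-period
# sequence: the combined period is the product of the coprime component periods.
def crt_pair(r1, m1, r2, m2):
    """Unique x in [0, m1*m2) with x = r1 (mod m1), x = r2 (mod m2)."""
    return (r1 + m1 * ((r2 - r1) * pow(m1, -1, m2) % m2)) % (m1 * m2)

m1, m2 = 3, 5                                  # coprime component moduli
s1 = [i % m1 for i in range(100)]              # toy period-3 component sequence
s2 = [(2 * i + 1) % m2 for i in range(100)]    # toy period-5 component sequence
combined = [crt_pair(s1[i], m1, s2[i], m2) for i in range(100)]

def period(seq):
    for p in range(1, len(seq)):
        if all(seq[i] == seq[i + p] for i in range(len(seq) - p)):
            return p
    return len(seq)

assert period(combined) == m1 * m2             # period extended from 3 and 5 to 15
```

In hardware, each component runs over a small field in parallel (narrow datapaths), and only the CRT recombination touches the wide output word, which is the source of the speed and area savings the abstract describes.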
2018, 40(1): 50-56.
doi: 10.11999/JEIT170384
Abstract:
A secret key generation scheme based on a cooperative relay is proposed to improve the secret key rate for quasi-static channels in the Internet of Things. First, the two legitimate nodes send training sequences to estimate the direct channel information. The relay then employs a network coding technique to participate in the cooperation and assists the two legitimate nodes in obtaining the relay channel information. Finally, the two legitimate nodes agree on a secret key from the direct and relay channel information over the direct channel, without further help from the relay. Security analysis shows that the scheme improves the achievable secret key rate, and that the achievable key rate increases linearly with SNR, approaching the optimal rate. Monte Carlo simulations verify the security analysis and show that increasing the number of relay nodes and selecting the relay whose channel has the larger variance can further improve the achievable secret key rate.
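A back-of-envelope sketch of why the key rate grows with SNR, using the standard Gaussian model rather than the paper's exact derivation: both nodes observe the same reciprocal channel gain plus independent estimation noise, and the achievable key rate is bounded by the mutual information of their observations, I(x_a; x_b) = −(1/2) log2(1 − ρ²), where ρ is the correlation coefficient of the two noisy estimates.

```python
# Generic Gaussian key-rate bound (illustrative model, not the paper's exact
# expression): x = h + n at each node, with h ~ N(0, var_h), n ~ N(0, var_n).
from math import log2

def key_rate_bound(var_h, var_n):
    """Mutual information (bits per observation) of the two noisy estimates."""
    rho2 = (var_h / (var_h + var_n)) ** 2   # squared correlation coefficient
    return -0.5 * log2(1 - rho2)

# The bound grows with SNR = var_h / var_n, matching the trend in the abstract;
# it also shows why relays with larger channel variance help.
assert key_rate_bound(1.0, 0.1) > key_rate_bound(1.0, 1.0)
```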
2018, 40(1): 57-62.
doi: 10.11999/JEIT170312
Abstract:
A large multiplier is an indispensable module in fully homomorphic encryption, and it is also the most time-consuming one. A well-performing large multiplier therefore helps move fully homomorphic encryption toward practicality. Aimed at the demands of an SSA (Schönhage-Strassen Algorithm) large multiplier, a 1624-bit finite field FFT is designed on FPGA using the Verilog HDL language. By constructing a tree-type large summation unit and using parallel processing, the speed of the FFT algorithm is improved effectively, and its correctness is verified by comparison with system-level simulation results in the VIM compiler environment.
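The finite field FFT at the heart of SSA multiplication is a number-theoretic transform: a DFT where the complex roots of unity are replaced by roots of unity in a prime field. A minimal software sketch with toy parameters (length n = 4 over GF(17), where w = 4 satisfies 4² ≡ −1 and 4⁴ ≡ 1 mod 17); a hardware FFT like the one described pipelines exactly this butterfly arithmetic, only at far larger lengths and word widths:

```python
# Minimal number-theoretic transform (NTT) and its inverse over GF(17).
P, N, W = 17, 4, 4    # prime modulus, transform length, primitive N-th root

def ntt(a, w=W):
    # Naive O(N^2) DFT over the field; a real design uses the FFT recursion.
    return [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P for i in range(N)]

def intt(A):
    inv_n = pow(N, -1, P)
    return [x * inv_n % P for x in ntt(A, pow(W, -1, P))]

x = [1, 2, 3, 4]
assert intt(ntt(x)) == x      # the transform round-trips exactly (no rounding)
```

Unlike a floating-point FFT, every intermediate value is an exact field element, which is why SSA multipliers built on the NTT produce exact products.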
2018, 40(1): 63-71.
doi: 10.11999/JEIT170323
Abstract:
To overcome the problem that the performance of intrusion detection deteriorates significantly in resource-constrained wireless sensor networks, a dynamic multi-stage game model of intrusion detection is proposed. Based on Bayesian rules and the prior probability that an external node is malicious at the current stage, the posterior probability of the external node and the set of nodes vulnerable to attack are formulated. The optimal defense strategy for intrusion detection is then calculated accurately according to the conditions of perfect Bayesian equilibrium. On this basis, a novel intrusion detection scheme for WSNs is proposed based on the optimal strategy of the multi-stage game model. Finally, experimental results show that the developed scheme has a distinct advantage in improving the success rate of detection and suppression in clustered WSNs.
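The stage-to-stage belief update described above is a direct application of Bayes' rule. A minimal sketch (the likelihood numbers are illustrative, not from the paper):

```python
# Bayesian posterior that an external node is malicious, given the prior and
# the likelihood of the observed behavior under each node type.
def posterior_malicious(prior, p_obs_given_mal, p_obs_given_normal):
    joint_mal = prior * p_obs_given_mal
    joint_norm = (1.0 - prior) * p_obs_given_normal
    return joint_mal / (joint_mal + joint_norm)

# A suspicious observation (far likelier from a malicious node) raises the
# belief carried into the next stage of the game.
belief = 0.2
belief = posterior_malicious(belief, p_obs_given_mal=0.9, p_obs_given_normal=0.1)
assert belief > 0.2
```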
2018, 40(1): 72-78.
doi: 10.11999/JEIT170344
Abstract:
To solve the problem of error propagation and accumulation in the distributed multihop iterative localization process in wireless sensor networks, the influence of anchor geometry on localization error is first analyzed and an error control algorithm based on Geometric Dilution Of Precision (GDOP) is proposed. By designing a careful weighting scheme, the error magnification effect of geometry is quantitatively captured in the weights, which effectively controls error propagation at every iterative step and improves the distributed localization accuracy of the whole network. The performance evaluation shows that, compared with the classic iterative localization algorithm based on least squares estimation and another weighting algorithm based on iteration rounds, the localization precision of the GDOP-weighted algorithm is improved by 25% and 15%, respectively.
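The quantity the weighting scheme is built on can be computed in a few lines. For 2-D localization, the rows of H are unit vectors from the node toward each anchor, and GDOP = sqrt(trace((HᵀH)⁻¹)); well-spread anchors give a small GDOP while near-collinear anchors inflate it, so something like 1/GDOP is a natural per-iteration weight (the exact weighting rule here is a plausible sketch, not the paper's formula):

```python
# 2-D GDOP from anchor bearings, using the closed-form inverse of the 2x2
# matrix H^T H (for [[a, b], [b, c]], trace of the inverse is (a + c) / det).
from math import cos, sin, radians, sqrt

def gdop(anchor_bearings_deg):
    h = [(cos(radians(b)), sin(radians(b))) for b in anchor_bearings_deg]
    a = sum(u * u for u, v in h)
    b = sum(u * v for u, v in h)
    c = sum(v * v for u, v in h)
    det = a * c - b * b
    return sqrt((a + c) / det)

# Spread anchors (good geometry) beat clustered anchors (bad geometry).
assert gdop([0, 120, 240]) < gdop([0, 10, 20])
```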
2018, 40(1): 79-86.
doi: 10.11999/JEIT170325
Abstract:
To solve the problems of low environmental adaptability, poor topology correlation and large embedding cost in virtual network embedding algorithms, an environment-adaptive and joint topology-aware virtual network embedding algorithm is proposed. First, a ranking method based on weighted relative entropy is proposed to quantify nodes with multiple indices, with the weights changed according to the environment. The weighted relative entropy and a breadth-first search algorithm are both used in the virtual node ranking phase, and the nearest degree is introduced into physical node ranking; together these achieve joint awareness of the virtual and physical topologies. Finally, the k-shortest path algorithm is introduced into virtual link embedding. Simulation results show that the proposed algorithm can improve the acceptance ratio and the revenue-to-cost ratio by adjusting the weights according to the environment.
2018, 40(1): 87-94.
doi: 10.11999/JEIT170388
Abstract:
The Fiber-Wireless (FiWi) hybrid network can effectively meet people's need to receive high-speed services anytime and anywhere. As the key nodes of the FiWi network, the number and locations of ONUs determine the cost and performance of the network to a great extent. To reduce the construction cost of the FiWi network and improve network performance, an ONU deployment strategy based on an improved genetic algorithm is proposed for the EPON-WiMAX hybrid network. It minimizes the number of deployed ONUs while taking load balancing into account. The simulation results show that the proposed algorithm avoids sub-optimal traps and yields the smallest final number of ONUs, while maintaining a high level of load balancing with better performance.
2018, 40(1): 95-101.
doi: 10.11999/JEIT170358
Abstract:
The LDPC decoding algorithm with an improved penalty function can improve the performance of decoding based on the Alternating Direction Method of Multipliers (ADMM), but it has too many parameters to optimize and the performance improvement is limited. To address this problem, by comparing it with other decoding algorithms with penalty functions, it is found that the only difference between them lies in the update rules for the variable nodes. Therefore, a new update method for the variable nodes is proposed in this paper to reduce the number of parameters and improve the decoding performance. The simulation results show that, compared with the original decoding algorithm, the proposed algorithm reduces the number of parameters that need to be optimized; in addition, its average number of iterations is smaller and it achieves about 0.1 dB of performance improvement.
2018, 40(1): 102-107.
doi: 10.11999/JEIT170321
Abstract:
A space-time optimized Multiple-Input Multiple-Output (MIMO) wireless transmission system based on a virtual channel method is proposed. At the transmitter, various space-time virtual channels are generated and concatenated with the actual wireless channels to form cooperative space-division channels. According to feedback from the receiver, the Bit Error Rate (BER) can be significantly improved by using a simulated annealing algorithm to optimize the virtual channels. Moreover, the virtual channel method allows one MIMO antenna to transmit multiple superposed data streams in one frequency band at the same time, so the system can transmit more distinct data streams than it has transmit antennas, breaking the conventional constraint that the number of transmitted data streams equals the number of transmit antennas. The proposed MIMO system can thus significantly improve spectral efficiency. Simulation results and experimental tests on ZC706 and AD9361 hardware platforms in a microwave anechoic chamber fully demonstrate the effectiveness of the proposed MIMO system.
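The simulated annealing loop used to tune the virtual channels follows the standard accept/reject skeleton below. The objective here is a stand-in (the paper minimizes measured BER via receiver feedback, which cannot be reproduced in a few lines), so treat this as the generic optimizer shape, not the system itself:

```python
# Generic simulated-annealing skeleton: accept improvements always, accept
# worse moves with probability e^{-delta/t}, and cool the temperature t.
import math, random

def anneal(cost, x0, step, t0=1.0, cooling=0.95, iters=500, seed=1):
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        cand = step(x, rng)
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
        if c < best_c:                 # remember the best state ever visited
            best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

cost = lambda x: (x - 3.0) ** 2 + 1.0          # stand-in objective, minimum at x = 3
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, c_best = anneal(cost, x0=0.0, step=step)
assert c_best <= cost(0.0)                     # never ends worse than the start
```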
2018, 40(1): 108-115.
doi: 10.11999/JEIT170478
Abstract:
Existing research on secure transmission with Coordinated Multi-Point transmission (CoMP) in heterogeneous cellular networks mainly focuses on improving the quality of the main channel to enhance security. However, CoMP also shortens the average distance between base stations and eavesdroppers, making the security threat more severe. Based on a secrecy guard zone, an enhanced CoMP policy is proposed in this paper. The connection outage probability, secrecy outage probability and secrecy throughput are then analyzed. Furthermore, the transmission power and the power allocation factor are designed carefully to maximize the secrecy throughput. Simulation results show that, compared with the conventional CoMP policy, the proposed policy not only achieves non-zero secrecy throughput under severe security threats (i.e., larger eavesdropper density), but also improves the secrecy throughput by up to 76.1% under small security threats (i.e., smaller eavesdropper density).
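The quantity a guard-zone policy is protecting is the secrecy capacity of the wiretap channel, Cs = [log2(1 + snr_main) − log2(1 + snr_eve)]⁺. Pushing eavesdroppers out of a guard zone lowers snr_eve, and CoMP raises snr_main; the abstract's point is that naive CoMP can raise snr_eve too. A one-function sketch:

```python
# Secrecy capacity of a Gaussian wiretap channel (bits per channel use):
# positive part of the main-channel capacity minus the eavesdropper capacity.
from math import log2

def secrecy_capacity(snr_main, snr_eve):
    return max(0.0, log2(1.0 + snr_main) - log2(1.0 + snr_eve))

assert secrecy_capacity(15.0, 3.0) == log2(16.0) - log2(4.0)   # = 2 bits/use
assert secrecy_capacity(3.0, 15.0) == 0.0    # eavesdropper too strong: no secrecy
```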
2018, 40(1): 116-122.
doi: 10.11999/JEIT170399
Abstract:
Three-Dimensional Multi-Input Multi-Output (3D-MIMO) systems can effectively improve spectral efficiency and system capacity. However, with the growing numbers of antennas and users, pilot sequences become non-orthogonal, which affects the accuracy of 3D-MIMO channel estimation and increases complexity. In this paper, the structured sparsity and low-rank property of the 3D-MIMO channel are studied. By taking advantage of these properties, a channel estimation algorithm is proposed, and the convergence and complexity of the algorithm are analyzed. Simulation results verify that the proposed algorithm can accurately recover the 3D-MIMO channel with low complexity.
2018, 40(1): 123-129.
doi: 10.11999/JEIT170309
Abstract:
The non-asymptotic spectral theory of random matrices is applied to cooperative spectrum sensing: the maximum and minimum eigenvalues of the sampled signal covariance matrix are analyzed, and an Exact Maximum Minimum Eigenvalue Difference (EMMED) algorithm is proposed. For any given number of cooperative users K and sampling points N, the exact Probability Density Function (PDF) and Cumulative Distribution Function (CDF) of the difference between the maximum and minimum eigenvalues are derived. An accurate decision threshold is then designed using these distribution functions. Theoretical analysis shows that the decision threshold of the EMMED algorithm is more accurate than that of the existing Asymptotic Maximum Minimum Eigenvalue Difference (AMMED) algorithm, requires no knowledge of the primary user signal, and is not affected by noise uncertainty. In addition, simulation results show that the EMMED algorithm has better detection performance than the existing Exact Maximum Eigenvalue (EME) and AMMED algorithms in real sensing environments with noise uncertainty.
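The test statistic itself is easy to sketch. The toy example below (illustrative only; it does not use the exact threshold derived from the paper's CDF) computes the maximum-minimum eigenvalue difference of the sample covariance matrix and shows that it grows when a common primary-user signal is present.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 500                     # cooperative users, samples per user

# Noise-only hypothesis: independent noise at each user.
noise = rng.standard_normal((K, N))
R0 = noise @ noise.T / N          # sample covariance matrix
e0 = np.linalg.eigvalsh(R0)       # eigenvalues in ascending order
stat_noise = e0[-1] - e0[0]       # max-min eigenvalue difference

# Signal-present hypothesis: a common signal received by all users
# spreads the eigenvalues of the covariance matrix apart.
s = rng.standard_normal(N)
R1 = (noise + np.outer(np.ones(K), s)) @ (noise + np.outer(np.ones(K), s)).T / N
e1 = np.linalg.eigvalsh(R1)
stat_signal = e1[-1] - e1[0]
```

A detector of this family declares the channel occupied when the statistic exceeds a threshold; EMMED's contribution is choosing that threshold from the exact finite-(K, N) distribution rather than an asymptotic one.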
2018, 40(1): 130-136.
doi: 10.11999/JEIT170274
Abstract:
To address the difficulty of determining sleeping cycles under traffic uncertainty in dense network scenarios, this paper proposes a traffic-aware micro base station sleeping-cycle determination strategy based on the Partially Observable Markov Decision Process (POMDP). In this strategy, the sleeping cycle is divided into a long cycle and a short cycle, each consisting of a deep stage and a light stage. Using POMDP-based traffic awareness, the strategy dynamically adjusts the cycle and determines its proper length. Both analytical and simulation results show that, compared with a sleeping strategy based on a traffic threshold, the traffic-aware strategy effectively reduces the energy consumption of micro base stations in dense networks by adjusting their sleeping time in real time.
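The core of any POMDP-based strategy is the belief update: the base station cannot observe the true traffic state, only noisy indications of it. The sketch below uses a hypothetical two-state traffic model (the transition and observation matrices are illustrative, not taken from the paper) to show one Bayesian belief update step.

```python
import numpy as np

# Hypothetical two-state traffic model: state 0 = "low", state 1 = "high".
T = np.array([[0.9, 0.1],        # P(next state | current state), rows sum to 1
              [0.3, 0.7]])
O = np.array([[0.8, 0.2],        # P(observation | state)
              [0.25, 0.75]])

def belief_update(b, obs):
    """One POMDP belief update: predict with T, correct with O, normalize."""
    predicted = b @ T                  # prior over next state
    corrected = predicted * O[:, obs]  # weight by observation likelihood
    return corrected / corrected.sum()

b = np.array([0.5, 0.5])
b = belief_update(b, obs=1)            # an observation suggesting heavy traffic
```

The sleeping controller would then pick cycle lengths based on this belief, e.g. choosing a long deep-sleep cycle only when the belief in the "low traffic" state is high.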
2018, 40(1): 137-142.
doi: 10.11999/JEIT170305
Abstract:
A fast lossless compression algorithm using texture prediction and mixed Golomb coding is proposed to reduce computational complexity while maintaining a high compression ratio. First, the reference pixel of the current pixel is obtained by texture-direction prediction and the pixel difference is calculated. The difference is then entropy coded with a mixed Golomb code, which greatly improves compression performance. Simulation results show that, compared with lossless frame-memory compression using pixel-gain prediction and dynamic-order entropy coding, the proposed algorithm reduces the average coding time by 36.86%, while slightly increasing the average compression ratio.
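To make the coding step concrete, here is a minimal Rice coder (the power-of-two special case of Golomb coding) applied to a signed prediction residual. The zigzag mapping and parameter k are generic illustrations, not the paper's specific "mixed Golomb" construction.

```python
def zigzag(d):
    """Map a signed prediction residual to a non-negative integer:
    0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ..."""
    return (d << 1) if d >= 0 else (-d << 1) - 1

def rice_encode(value, k):
    """Rice code (Golomb with divisor 2**k): unary-coded quotient
    terminated by '0', followed by k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    code = '1' * q + '0'
    if k:
        code += format(r, '0{}b'.format(k))
    return code

bits = rice_encode(zigzag(-3), k=2)   # residual -3 -> index 5 -> '1001'
```

Small residuals, which texture prediction makes common, map to short codewords; that is what gives prediction-plus-Golomb schemes their compression efficiency.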
2018, 40(1): 143-150.
doi: 10.11999/JEIT170165
Abstract:
Considering the influence of compression and wireless-channel packet loss on mobile video quality, and analyzing the spatio-temporal perceptual statistics of the differences between adjacent video frames, a No-reference Mobile Video Quality Assessment (NMVQA) algorithm based on natural video statistics is proposed. First, the influence of various video distortion types on the statistical characteristics of the difference coefficients between adjacent frames is analyzed in terms of the natural statistical regularities of frame differences. Second, the temporal change of the distribution parameters of the products of adjacent-frame differences, computed along the horizontal, vertical, and diagonal spatial orientations, is calculated. Finally, the distortion degree of mobile video is measured by the correlation between the multi-scale temporal changes of these statistics. Experimental results on the LIVE mobile video database show that NMVQA is highly consistent with subjective assessment results and reflects human subjective perception well; it can therefore be used to evaluate real-time online adjustment of the source rate and wireless-channel parameters.
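The oriented products of frame-difference coefficients that such features are built from can be computed in a few lines. This is a generic sketch on random data (the distribution fitting and multi-scale correlation steps of NMVQA are omitted).

```python
import numpy as np

rng = np.random.default_rng(2)
video = rng.standard_normal((5, 32, 32))   # 5 toy frames of a 32x32 "video"
diff = np.diff(video, axis=0)              # adjacent-frame differences

# Products of neighbouring difference coefficients along three spatial
# orientations; their empirical distributions carry the distortion cues.
horiz = diff[:, :, :-1] * diff[:, :, 1:]
vert  = diff[:, :-1, :] * diff[:, 1:, :]
diag  = diff[:, :-1, :-1] * diff[:, 1:, 1:]
feats = [float(np.mean(p)) for p in (horiz, vert, diag)]
```

In a no-reference setting, a parametric distribution (e.g. a generalized Gaussian family) would be fitted to these products and the fitted parameters tracked over time.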
2018, 40(1): 151-156.
doi: 10.11999/JEIT170311
Abstract:
The diffraction nonlocal boundary condition is a transparent boundary condition used in the Finite-Difference (FD) Parabolic Equation (PE). Its main advantage is that it can absorb the outgoing wave completely using only one layer of grid points. However, its computation is slow because of the time-consuming spatial convolution integrals. To solve this problem, recursive convolution and the vector fitting method are introduced to accelerate the computation; the boundary combined with these two methods is called the improved diffraction nonlocal boundary condition. The improved boundary condition is then applied to a decomposed Three-Dimensional Parabolic Equation (3DPE) model. Numerical results demonstrate both the accuracy and the speed of the decomposed 3DPE model combined with the improved diffraction nonlocal boundary condition.
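The speed-up comes from a standard identity: once vector fitting approximates the boundary kernel by a sum of exponential terms c·aⁿ, each term of the convolution can be updated in O(1) per step instead of re-summing the whole history. A one-term sketch (illustrative values, not the actual PE kernel):

```python
import numpy as np

def direct_conv(x, a):
    """O(n^2) convolution with the exponential kernel a**n."""
    return np.array([sum(a**(n - m) * x[m] for m in range(n + 1))
                     for n in range(len(x))])

def recursive_conv(x, a):
    """Same convolution via the recursion y[n] = a*y[n-1] + x[n], O(n) total."""
    y, acc = np.empty(len(x)), 0.0
    for n, xn in enumerate(x):
        acc = a * acc + xn
        y[n] = acc
    return y

x = np.array([1.0, 2.0, -1.0, 0.5])
assert np.allclose(direct_conv(x, 0.8), recursive_conv(x, 0.8))
```

With a multi-term vector-fitting approximation, one such accumulator is kept per fitted pole and the results are summed, which is what turns the nonlocal boundary from the bottleneck into a cheap update.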
2018, 40(1): 157-165.
doi: 10.11999/JEIT170397
Abstract:
To automatically determine the number of clusters in multispectral remote sensing image segmentation, a Fuzzy C-Means (FCM) algorithm with an unknown number of clusters is proposed. First, a new dissimilarity measure between a pixel and a cluster is defined, and the fuzzy membership function and cluster centers are obtained by minimizing the objective function. Then, the relationship between the fuzzy factor and the number of clusters is studied. The optimal fuzzy factor is selected by defining a Partition Entropy (PE) index and taking the minimum fuzzy factor after the PE values converge. From the relationship between the fuzzy factor and the number of clusters, the optimal number of clusters is obtained, and variable-cluster segmentation of the image is realized. Analysis of segmentation results on a synthesized image and real multispectral remote sensing images shows that the proposed algorithm can automatically determine the number of clusters while obtaining ideal segmentation results, providing a new method for automatically determining the number of clusters in remote sensing images.
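The Partition Entropy index used for model selection is simple to state: it measures how fuzzy a membership matrix is, with lower values indicating a crisper (better-defined) partition. A minimal sketch of Bezdek's PE on toy membership matrices:

```python
import numpy as np

def partition_entropy(U):
    """Bezdek's partition entropy of a fuzzy membership matrix U (c x N):
    PE = -(1/N) * sum_{k,i} u_ik * log(u_ik). Lower is crisper."""
    eps = 1e-12                      # avoid log(0) for hard memberships
    return float(-np.mean(np.sum(U * np.log(U + eps), axis=0)))

crisp = np.array([[1.0, 0.0],        # each pixel fully in one cluster
                  [0.0, 1.0]])
fuzzy = np.full((2, 2), 0.5)         # maximally ambiguous memberships
```

A hard partition gives PE near 0, while equal memberships give PE = ln(c); scanning PE over candidate cluster numbers (or, in this paper, over the fuzzy factor) is what drives the automatic selection.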
2018, 40(1): 166-172.
doi: 10.11999/JEIT170254
Abstract:
In a cognitive radar system, the system parameters can be optimized to match the current environment during detection, improving the radar's detection performance. For the unknown-target detection problem, the useful-signal component estimate and its covariance matrix are updated from the current returns, and the transmit waveform and receive filter are then optimized based on this information, forming a closed-loop process. Two optimization approaches are proposed. In the first, the transmit waveform is designed from the estimate of the useful signal, and a generalized matched filter is used at the receiver. In the second, the estimation error of the useful signal is treated as signal-dependent noise, and the transmit waveform and receive filter are jointly designed. Computer simulation results show that the two methods are asymptotically equivalent and, compared with the gain of coherent accumulation alone, further improve detection performance.
2018, 40(1): 173-180.
doi: 10.11999/JEIT170329
Abstract:
For radar High Resolution Range Profile (HRRP) automatic target recognition, the extracted features should carry sufficient target information and offer high discrimination, noise robustness, and low dimensionality. However, HRRP recognition suffers from insufficient information and low-discrimination features, while the recognition system also requires real-time processing with low-dimensional features. To obtain low-dimensional, highly discriminative features, a novel feature extraction method named Kernel Principal Component Correlation and Discrimination Analysis (KPCCDA) is designed for HRRPs. In the proposed method, the statistical characteristics of different scatterer range cells are effectively exploited by Kernel Principal Component Analysis (KPCA), and the within-class correlation and between-class discrimination are maximized using linear discriminant analysis and canonical correlation analysis. In addition, the redundancy and dimensionality of the feature vectors are reduced, lowering the computational complexity to meet the storage requirements of practical radar target recognition. Experimental results on measured data validate the efficiency of the proposed method.
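The KPCA building block can be sketched generically: form an RBF Gram matrix, double-centre it, and project onto its leading eigenvectors. This is plain KPCA on random data, not the full KPCCDA pipeline, and the kernel width is an arbitrary illustrative choice.

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel via eigendecomposition of the
    double-centred Gram matrix. Returns the projected samples."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # centre the kernel in feature space
    w, V = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

X = np.random.default_rng(3).standard_normal((20, 5))   # 20 toy "range profiles"
Z = kpca(X, n_components=2)
```

In KPCCDA, projections of this kind would then be fed to the correlation/discrimination stage that maximizes within-class correlation and between-class separation.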
2018, 40(1): 181-188.
doi: 10.11999/JEIT170253
Abstract:
To describe precisely the effect of atmospheric conditions on microwave propagation and to establish a theoretical foundation for new atmospheric-inversion applications of microwave links, the propagation attenuation caused by absorptive gases and various atmospheric particles is investigated systematically in this paper. The absorption of the main gaseous components of the atmosphere is calculated with the ITU-R model. Then, based on the physical characteristics and dielectric models of different types of precipitation, cloud and fog, and sand particles, the scattering characteristics of atmospheric particle ensembles in the microwave band are calculated, and the effects of particle size distribution, intensity, phase, and temperature on microwave propagation in different wavebands are discussed systematically. Numerical simulation results show absorption bands at 60 GHz, 180 GHz, and 320 GHz due to oxygen and water vapor; the attenuation is positively correlated with both the vapor content and the air pressure, and negatively correlated with the temperature. Precipitation-induced attenuation is mainly influenced by the precipitation intensity, particle size distribution, phase, and component ratio; the water content and phase of cloud and fog are the main factors affecting their attenuation; and the number density, size distribution, and water content of dust are the main factors affecting dust attenuation, with temperature mattering least. In order of attenuation coefficient, from largest to smallest: blowing dust, precipitation, gas absorption, water fog, ice fog, and atmospheric dust.
2018, 40(1): 189-199.
doi: 10.11999/JEIT170301
Abstract:
The threshold Q in the post-processing of the traditional cross-section projection Otsu's method is a constant, which is not universally applicable to images with different noise. To solve this problem, this paper proposes a multi-objective cross-section projection Otsu's method based on a memory kinetic-molecular theory optimization algorithm. Based on the maximum between-class variance criterion and the maximum Peak Signal-to-Noise Ratio (PSNR) criterion, a multi-objective image segmentation model is established that accounts for both segmentation accuracy and noise robustness by combining the threshold Q with the segmentation threshold T. To improve efficiency, a memory kinetic-molecular theory optimization algorithm is devised for the multi-objective method by introducing artificial memory principles into the kinetic-molecular theory optimization algorithm. Experimental results show that the method has significant advantages in segmentation accuracy, noise robustness, and stability, and is more universally applicable to images with different noise.
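The maximum between-class variance criterion underlying all Otsu variants is compact enough to show directly. This is the classic single-threshold Otsu method on a toy bimodal image, not the paper's cross-section projection or multi-objective extension.

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu: pick the grey level t maximizing the between-class
    variance sigma_b^2(t) = (mu_T*w0 - mu)^2 / (w0*(1 - w0))."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # grey-level probabilities
    w0 = np.cumsum(p)                      # class-0 weight per threshold
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Toy bimodal "image": half the pixels at grey level 50, half at 200.
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)
```

The multi-objective formulation in the paper optimizes this criterion jointly with PSNR, trading segmentation accuracy against noise robustness rather than maximizing variance alone.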
2018, 40(1): 200-208.
doi: 10.11999/JEIT170402
Abstract:
Since traditional event causal-relation recognition suffers from limited coverage, a method for causal-relation extraction of Uyghur events based on a Bidirectional Long Short-Term Memory (BiLSTM) model is presented. To make full use of event structure information, 10 structural features of Uyghur events are extracted based on a study of event causal relationships and the characteristics of the Uyghur language. At the same time, word embeddings are introduced as the input of the BiLSTM to extract deep semantic features of Uyghur events, and the Batch Normalization (BN) algorithm is used to accelerate the convergence of the BiLSTM. Finally, the two kinds of features are concatenated as the input of a softmax classifier to extract causal relations between Uyghur events. Applied to causal-relation extraction of Uyghur events, the method achieves a precision of 89.19%, a recall of 83.19%, and an F value of 86.09%, indicating its effectiveness and practicality.
2018, 40(1): 209-218.
doi: 10.11999/JEIT170296
Abstract:
To eliminate computational redundancy and improve the speed of the basic wide line detector, a fast implementation named the randomized moving wide line detector is proposed. Instead of moving the mask pixel by pixel to detect wide lines, as in the basic implementation, the randomized moving wide line detector places the mask in the image randomly and then determines the mask-moving strategy heuristically according to the current pixel. In this way, mask movement is accelerated, clearly reducing the computational redundancy of the basic detector. Furthermore, two early-termination conditions are proposed to break out of the detection loop based on the detection status of wide lines. Test images are adopted to evaluate the performance of the randomized moving wide line detector. Experimental results demonstrate that the proposed detector significantly accelerates the basic wide line detector while leaving its detection performance unaffected.
2018, 40(1): 219-225.
doi: 10.11999/JEIT170219
Abstract:
Attaching topic features to the input of Recurrent Neural Network (RNN) models is an efficient way to leverage distant contextual information. To cope with the problem that topic distributions may vary greatly among documents, this paper proposes an improved topic feature using the topic distributions of documents and applies it to a recurrent Long Short-Term Memory (LSTM) language model. Experiments show that the proposed feature achieves an 11.8% relative perplexity reduction on the Penn TreeBank (PTB) dataset, and 6.0% and 6.8% relative Word Error Rate (WER) reductions on the SwitchBoard (SWBD) and Wall Street Journal (WSJ) speech recognition tasks, respectively. On the WSJ task, an RNN with this feature matches the performance of an LSTM on the eval92 test set.
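For readers unfamiliar with the evaluation metric: perplexity is the exponential of the average negative log-probability a language model assigns to each test token, and "relative reduction" compares two such values. A minimal sketch with made-up probabilities:

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity of a language model over a test sequence:
    exp of the mean negative log-probability per token."""
    return float(np.exp(-np.mean(np.log(token_probs))))

# A model assigning every token probability 0.1 has perplexity 10.
ppl_base = perplexity([0.1] * 20)

def relative_reduction(base, improved):
    """Relative improvement, e.g. 0.118 for an 11.8% perplexity reduction."""
    return (base - improved) / base
```

So a baseline perplexity of 100 improved to 88.2 would correspond to the 11.8% relative reduction reported above.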
2018, 40(1): 226-234.
doi: 10.11999/JEIT170306
Abstract:
To address the problem that the Pseudo-Noise (PN) sequence of a soft spread-spectrum signal is difficult to estimate when coding is employed, a blind PN-sequence estimation method for soft spread-spectrum signals based on an improved K-means algorithm is proposed. First, the received signal is divided into consecutive non-overlapping temporal vectors according to one PN-sequence period to construct an observation data matrix. Second, similarity-measure theory is applied to find the optimal initial cluster centers for the K-means algorithm from the observation matrix. The number of PN sequences is then estimated by searching for the maximum absolute value of the average Silhouette Coefficient (SC). Finally, the estimated cluster centers corresponding to this number are found, completing the blind estimation of the PN sequences of the soft spread-spectrum signal. Simulation results show that, for a PN-sequence estimation error probability below 0.1, the proposed method improves the required Signal-to-Noise Ratio (SNR) by about 4 dB compared with the traditional method, and its blind despreading performance is also better than the unmodified method under the same conditions.
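The silhouette criterion used for model selection can be sketched on toy data. The example below (a generic illustration, not the paper's initialization scheme) computes the mean silhouette coefficient and shows that a labeling matching two well-separated clusters scores higher than a scrambled one; scanning this score over candidate cluster counts is how the number of PN sequences would be picked.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient s = (b - a)/max(a, b), where a is the
    mean intra-cluster distance and b the mean distance to the nearest
    other cluster. Higher means a better-separated clustering."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    s = np.empty(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return float(s.mean())

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (10, 2)),    # two tight, well-separated blobs
               rng.normal(5, 0.1, (10, 2))])
good = np.repeat([0, 1], 10)                   # labels matching the blobs
bad = np.tile([0, 1], 10)                      # labels mixing the blobs
```

Each cluster needs at least two members for the intra-cluster term `a` to be defined, which the construction above guarantees.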
2018, 40(1): 235-243.
doi: 10.11999/JEIT170168
Abstract:
To overcome the poor error tolerance and heavy computation of current algorithms for recognizing the Recursive Systematic Convolutional (RSC) encoder in Turbo codes, a new fast algorithm is proposed. First, based on the special structure of RSC codes, a more general concept named the generalized code weight is defined. Second, an RSC polynomial database is built, and the probability distribution of the generalized code weight is analyzed for the two cases in which a polynomial in the database is or is not the actual polynomial; based on these distributions and the Maxmin criterion, the decision threshold of the fast algorithm is derived. Finally, the parameters are recognized by traversing the polynomials in the database and comparing the corresponding generalized code weight with the decision threshold. Simulation results show that the theoretical probability distributions are consistent with the simulations and that the error tolerance is good: the recognition rate remains above 90% even at a bit error rate as high as 0.09, while the computational complexity stays low.
2018, 40(1): 244-248.
doi: 10.11999/JEIT170347
Abstract:
Reinforcement learning, with its self-improving and online learning properties, obtains task policies through interaction with the environment, but its trial-and-error mechanism usually requires a large number of training episodes. Knowledge includes human experience and cognition of the environment. This paper introduces qualitative rules into reinforcement learning, representing them with a cloud reasoning model that serves as a heuristic exploration strategy to guide action selection. Empirical evaluation is conducted in the OpenAI Gym environment CartPole-v2, and the results show that the exploration strategy based on the cloud reasoning model significantly improves the performance of the learning process.
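The normal cloud model commonly used in cloud reasoning is defined by three numbers: expectation Ex, entropy En, and hyper-entropy He. Each "cloud drop" is drawn from a Gaussian whose standard deviation is itself random, giving a heavier-tailed, qualitatively fuzzier spread than a plain Gaussian. The parameters below are illustrative, not the paper's.

```python
import numpy as np

def cloud_drops(Ex, En, He, n, rng):
    """Normal cloud model: each drop x ~ N(Ex, En'^2), where the per-drop
    dispersion En' is itself drawn from N(En, He^2)."""
    En_prime = rng.normal(En, He, n)
    return rng.normal(Ex, np.abs(En_prime))

rng = np.random.default_rng(5)
drops = cloud_drops(Ex=0.0, En=1.0, He=0.1, n=5000, rng=rng)
```

Used as an exploration heuristic, drops generated around a rule's recommended action perturb action selection: a small He keeps exploration close to the qualitative rule, while a larger He loosens its grip as learning proceeds.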