2017 Vol. 39, No. 4
2017, 39(4): 763-769.
doi: 10.11999/JEIT161077
Abstract:
In dynamic networks, detecting community structure is a complicated and vital issue. For the community detection problem in dynamic networks, a novel game-theoretic algorithm based on the permanence of agents, called Permanence Dynamic Game (PDG), is proposed. In the PDG algorithm, each node in the dynamic network is regarded as a selfish agent. Every agent chooses the best-response strategy to select the communities it will belong to, according to the statuses of the other agents. To track the evolution of community structure in dynamic networks, the optimization strategy of configuration checking is applied, which greatly improves the efficiency of the original algorithm. Finally, to verify the effectiveness and efficiency of the proposed method, it is compared with state-of-the-art community detection algorithms on real dynamic networks.
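As an illustration of this kind of best-response dynamic, the following sketch uses a plain neighbor-count gain as a stand-in for the paper's permanence measure, and assigns each node to a single community (both simplifying assumptions):

```python
def best_response_communities(adj, rounds=10):
    """adj: dict node -> set of neighbours. Returns node -> community label.
    Gain of joining a community = number of neighbours already in it
    (an illustrative stand-in for the permanence-based payoff)."""
    label = {v: v for v in adj}          # start with singleton communities
    for _ in range(rounds):
        changed = False
        for v in adj:
            counts = {}
            for u in adj[v]:
                counts[label[u]] = counts.get(label[u], 0) + 1
            if not counts:
                continue
            best = max(counts, key=counts.get)
            if counts[best] > counts.get(label[v], 0):
                label[v] = best          # best-response strategy update
                changed = True
        if not changed:                  # Nash-like equilibrium reached
            break
    return label
```

On two disjoint triangles, each triangle converges to its own label within one round.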
2017, 39(4): 770-777.
doi: 10.11999/JEIT160516
Abstract:
Many community detection approaches regard a community as a single crisp set of nodes, which cannot depict the vagueness of community boundaries. A method based on rough sets is therefore proposed: it represents a community as a pair of lower and upper approximation sets, which can capture this vagueness. The method selects K nodes as central nodes, iteratively assembles nodes around their closest central nodes to form communities, then computes a new central node in each community and gathers nodes around it again until convergence. Experimental results on public and synthetic networks verify the feasibility and effectiveness of the proposed method.
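A minimal sketch of the lower/upper-approximation idea, assuming hop-count distance as the closeness measure and pre-chosen central nodes (the paper's center selection and re-computation steps are omitted):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def rough_communities(adj, centers):
    """Unique nearest centre -> lower approximation (certain members);
    tied centres -> upper approximations only (boundary region)."""
    dist = {c: bfs_dist(adj, c) for c in centers}
    lower = {c: {c} for c in centers}
    upper = {c: {c} for c in centers}
    for v in adj:
        if v in centers:
            continue
        d = {c: dist[c].get(v, float("inf")) for c in centers}
        m = min(d.values())
        nearest = [c for c in centers if d[c] == m]
        for c in nearest:
            upper[c].add(v)
        if len(nearest) == 1:            # unambiguous membership
            lower[nearest[0]].add(v)
    return lower, upper
```

On a path 0-1-2-3-4 with centers 0 and 4, the middle node 2 lands only in the boundary (both upper approximations), capturing the vagueness the abstract describes.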
2017, 39(4): 778-784.
doi: 10.11999/JEIT160605
Abstract:
In the evolution and development of social networks, the establishment of relationships among users is affected by various factors. By analyzing user behavior data and relationship data in social networks, this study tries to detect the key factors that affect the formation of relationships among users. Firstly, considering the complex driving factors behind user relationship establishment, the factors are extracted and impact-factor functions are defined for personal attributes, friendships, and community driving. Secondly, in order to quantify the driving factors and assign weights, a user relationship analysis model based on the principle of maximum entropy is proposed. When choosing features, the model does not depend on the associations among features, and it can quantify the strength of the various factors that drive users to establish relationships. Furthermore, the key factors that affect user relationships can be detected and the development trend of user relationships can be analyzed. Experimental results reveal that the proposed model can not only quantify the strength of each factor that drives relationship establishment, but also predict user relationships effectively.
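Since a maximum entropy model over a binary outcome (link / no link) reduces to logistic regression, the factor-weighting idea can be sketched as follows; the feature names and training scheme here are illustrative, not the paper's:

```python
import math

def train_maxent(samples, labels, features, epochs=200, lr=0.5):
    """samples: list of dicts factor -> value; labels: 0/1 (link formed).
    Returns (factor -> learned weight, bias); the weight magnitudes act
    as the strength of each driving factor."""
    w = {f: 0.0 for f in features}
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = b + sum(w[f] * x.get(f, 0.0) for f in features)
            p = 1.0 / (1.0 + math.exp(-z))   # P(link | features)
            g = y - p                         # log-likelihood gradient
            b += lr * g
            for f in features:
                w[f] += lr * g * x.get(f, 0.0)
    return w, b
```

On toy data where only a (hypothetical) "common_friends" feature predicts the link, its learned weight dominates the irrelevant feature's, which is exactly the factor-strength reading the abstract describes.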
2017, 39(4): 785-793.
doi: 10.11999/JEIT160940
Abstract:
In the Web 2.0 era, online social networks have become an important carrier of social relationship maintenance and information dissemination in human society because of their interactivity and immediacy. Therefore, it is very important to understand the behavioral characteristics of social network users and their impact on online information dissemination. From the perspective of human behavior dynamics, the empirical research of recent years on the behavior of social network users is first systematically reviewed. Secondly, the influence of social network users' behavior on online information dissemination is summarized. Finally, research on online social network information dissemination based on user behavior dynamics is summarized and future directions are discussed.
2017, 39(4): 794-804.
doi: 10.11999/JEIT161136
Abstract:
Online social networks are now recognized as an important platform for the spread of information. A lot of effort has been made to understand this phenomenon, including popularity analysis, diffusion modeling, and information source locating. This paper presents a survey of representative methods dealing with these issues and summarizes the state of the art. To facilitate future work, an analytical discussion of their shortcomings and related open problems is provided.
2017, 39(4): 805-816.
doi: 10.11999/JEIT160743
Abstract:
Online social networks are generating information at an explosive rate, and pieces of information compete with each other for people's limited attention. How people's attention to information evolves over time is referred to as the problem of popularity evolution. Popularity evolution reflects what people focus on and how information flows and diffuses. Predicting the popularity evolution of online information supports studies of information diffusion and human behavior, assists public opinion monitoring, and brings high application and commercial value. In recent years researchers have made substantial progress, yet there is still no survey that reviews and summarizes the existing work. This paper systematically reviews the main work on popularity evolution analysis and prediction, and summarizes the existing methods and models. First, insight into popularity evolution patterns is provided from qualitative and quantitative perspectives. Second, how to measure the factors affecting popularity evolution and how to classify them into a taxonomy are introduced. Third, the methods for modeling and predicting popularity evolution are categorized into three classes: previous-popularity-based, factor-based, and diffusion-based; these classes are elaborated in terms of theory, representative work, characteristic comparison, and application scope. Finally, the paper concludes and gives future research directions according to existing work and current demands.
2017, 39(4): 817-824.
doi: 10.11999/JEIT160583
Abstract:
The main problems in existing access control algorithms for Heterogeneous Wireless Networks (HWNs) are that connections between mobile users and the wireless networks are set up over a single transmission link, and that resource allocation in the access process is not optimized for the transmission of the whole network. In order to solve these problems, the resource allocation model and access link rate model in HWNs are analyzed, and a Multiple Link Access and Dynamic Resource Allocation (MLA-DRA) algorithm that supports multi-link access is proposed. The algorithm takes the maximization of the system transmission rate as the objective function, transforms the user access process into a chain of inter-connected multi-stage decisions, and employs the previous resource allocation state to calculate the optimal solution for the next user, thus deriving the optimal system transmission rate. On a simulation platform, the performance of the MLA-DRA algorithm is analyzed and compared with other algorithms. Experimental results show that the MLA-DRA algorithm can effectively utilize system resources and improve the system transmission rate.
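The multi-stage decision idea can be sketched as a simple dynamic program, assuming a discrete pool of resource blocks and a known per-user rate table (a toy stand-in for the paper's link-rate model):

```python
def allocate(rates, total_blocks):
    """rates[u][r] = rate of user u when given r blocks (r = 0..total_blocks).
    Stage u decides user u's share; state = blocks still free.
    Returns (best total rate, per-user allocation)."""
    n = len(rates)
    NEG = float("-inf")
    best = [[NEG] * (total_blocks + 1) for _ in range(n + 1)]
    choice = [[0] * (total_blocks + 1) for _ in range(n + 1)]
    best[0][total_blocks] = 0.0
    for u in range(n):
        for free in range(total_blocks + 1):
            if best[u][free] == NEG:
                continue
            for r in range(free + 1):         # blocks granted to user u
                val = best[u][free] + rates[u][r]
                if val > best[u + 1][free - r]:
                    best[u + 1][free - r] = val
                    choice[u + 1][free - r] = r
    # trace back the optimal allocation
    free = max(range(total_blocks + 1), key=lambda f: best[n][f])
    total = best[n][free]
    alloc = [0] * n
    for u in range(n, 0, -1):
        alloc[u - 1] = choice[u][free]
        free += alloc[u - 1]
    return total, alloc
```

Each stage only needs the resource state left by the previous stage, mirroring the abstract's "previous resource allocation state" formulation.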
2017, 39(4): 825-831.
doi: 10.11999/JEIT160623
Abstract:
The low scalability of current traffic routing mechanisms in large-scale software-defined data center networks causes network performance bottlenecks. This paper therefore proposes a data center network Segment Routing (SR) mechanism based on OpenFlow. The mechanism distinguishes flow sizes by letting edge switches apply a threshold test to each data stream. To meet the QoS guarantees and network scalability requirements of different services, a segment routing algorithm for elephant flows is proposed. Finally, Mininet is used for simulation experiments on a fat-tree topology. Compared with the traditional ECMP algorithm and the Mahout algorithm, simulation results show that the mechanism reduces controller overhead and achieves better network throughput.
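The edge-switch threshold test reduces to a simple classification step; in the sketch below the threshold value and the least-loaded-path rule are illustrative, not the paper's exact mechanism:

```python
def classify_flows(byte_counts, threshold=100 * 1024):
    """Edge-switch style split: flows at or above the byte threshold are
    'elephants' (routed individually), the rest are 'mice' (default ECMP)."""
    elephants, mice = [], []
    for flow, nbytes in byte_counts.items():
        (elephants if nbytes >= threshold else mice).append(flow)
    return elephants, mice

def route_elephant(paths_load):
    """Pick the least-loaded candidate path for an elephant flow."""
    return min(paths_load, key=paths_load.get)
```

Only elephants trigger controller involvement, which is where the reduced controller overhead reported in the abstract comes from.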
2017, 39(4): 832-839.
doi: 10.11999/JEIT160526
Abstract:
To deal with the high resolution latencies of existing mapping systems, a hierarchical mapping system based on active degree is proposed. In the system, the mappings between identifiers and locators are divided into three levels: active, neutral, and constant. On this basis, a three-tier system architecture for storing and resolving mapping entries is designed. Mapping entries stored at different levels vary with the active degrees of the remote communication terminals, and flow from one level to another. In order to minimize the mapping resolution latency, a construction model is proposed that models the system construction process as a Markov Decision Process (MDP). Moreover, a Markov decision construction algorithm is proposed, which uses reinforcement learning to obtain a globally optimal or near-optimal construction strategy. The simulation results show that the system has low resolution latency and adapts well to dynamic changes in network topology.
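The MDP-plus-reinforcement-learning construction can be pictured with plain tabular Q-learning; this is a generic learner on a made-up toy environment, not the paper's state space, rewards, or improved update rule:

```python
import random

def q_learning(states, actions, step, episodes=500,
               alpha=0.3, gamma=0.9, eps=0.2):
    """Tabular Q-learning; step(s, a) -> (next_state, reward, done).
    Returns the learned Q-table mapping (state, action) -> value."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(50):
            # epsilon-greedy action selection
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2, r, done = step(s, a)
            target = r + (0.0 if done else
                          gamma * max(Q[(s2, b)] for b in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

In the mapping-system setting, states would encode the current tier placement of entries and rewards the negative resolution latency; here those details are abstracted into the `step` callback.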
2017, 39(4): 840-846.
doi: 10.11999/JEIT160539
Abstract:
Full-duplex communication technology can improve link capacity and spectrum utilization, which brings great changes to existing Wireless Local Area Networks (WLANs). To solve the compatibility issues in the evolution from half-duplex WLAN to full-duplex WLAN, a HYbrid-Duplex MAC protocol (HYD-MAC) is proposed. According to the application scenario and the full-duplex capability of the stations, the Request To Send/Clear To Send (RTS/CTS) frames are extended, and HYD-MAC can adaptively choose one of four duplex modes: synchronous full duplex, asynchronous full duplex, conditional half duplex, and half duplex. The link establishment processes of the HYD-MAC protocol in the four transmission modes are presented, and network performance metrics of the proposed protocol, such as network saturation throughput and medium access delay, are analyzed. The results show that the HYD-MAC protocol can satisfy the communication requirements of full-duplex, half-duplex, and hybrid-duplex networks at the same time while sacrificing little throughput and delay performance.
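The adaptive mode choice can be pictured as a small decision function over capability flags that the extended RTS/CTS frames might carry; the rules below are illustrative guesses, not the protocol's exact conditions:

```python
def choose_duplex_mode(sender_fd, receiver_fd, reverse_traffic):
    """Pick one of the four HYD-MAC duplex modes from capability flags.
    sender_fd / receiver_fd: station supports full duplex.
    reverse_traffic: receiver also has a frame queued for the sender.
    (Hypothetical decision table for illustration only.)"""
    if sender_fd and receiver_fd and reverse_traffic:
        return "synchronous full duplex"   # both directions simultaneously
    if sender_fd and receiver_fd:
        return "asynchronous full duplex"
    if sender_fd or receiver_fd:
        return "conditional half duplex"   # FD-capable side falls back
    return "half duplex"
```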
2017, 39(4): 847-853.
doi: 10.11999/JEIT160581
Abstract:
In-phase/Quadrature (I/Q) imbalance, carrier leakage, and in-band distortion may lead to severe performance degradation, especially in direct-conversion radio architectures. An adaptive pre-distortion scheme is proposed to calibrate I/Q imbalance, carrier leakage, and in-band distortion. In the proposed scheme, the inverse filter of each branch is estimated directly, which avoids redundant computation. Moreover, an iterative scheme is proposed to track, in an online manner, parameter variations due to temperature. Complexity analysis shows that the proposed scheme is efficient, and its effectiveness is verified by simulation.
2017, 39(4): 854-859.
doi: 10.11999/JEIT160192
Abstract:
A Poisson Point Process (PPP) is used to model the distribution of base stations in two-tier heterogeneous networks, and the energy efficiency maximization problem is formulated in terms of base station density and traffic load. The influence of base station density on energy efficiency is analyzed in order to optimize the densities of macro and small base stations according to the traffic load. The optimal densities of macro and small base stations, which maximize energy efficiency under quality-of-service constraints, are derived. Simulation results indicate that, under the quality-of-service constraint, reasonable deployment of macro and small base stations can greatly improve energy efficiency.
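The density-optimization objective can be sketched as "served traffic per unit of consumed power" with a grid search over the two densities; the capacity and power figures below are placeholder values for illustration, not the paper's model:

```python
def energy_efficiency(lam_macro, lam_small, load,
                      p_macro=130.0, p_small=6.0,
                      c_macro=80.0, c_small=20.0):
    """Illustrative EE = served traffic / consumed power per unit area.
    lam_* are BS densities; p_*/c_* are placeholder power/capacity figures."""
    capacity = lam_macro * c_macro + lam_small * c_small
    served = min(load, capacity)             # cannot serve more than offered
    power = lam_macro * p_macro + lam_small * p_small
    return served / power if power > 0 else 0.0

def best_densities(load, grid):
    """Grid search over (macro, small) density pairs maximising EE."""
    return max(((lm, ls) for lm in grid for ls in grid if lm + ls > 0),
               key=lambda d: energy_efficiency(d[0], d[1], load))
```

With these toy numbers, low-power small cells dominate at light load; the actual optimum in the paper follows from the PPP coverage analysis rather than a grid search.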
2017, 39(4): 860-865.
doi: 10.11999/JEIT160461
Abstract:
For a multicarrier full-duplex-relay secure communication system with imperfect loop-interference cancellation, an optimal power allocation strategy with statistical delay Quality-of-Service (QoS) guarantees is proposed, combining the delay QoS constraint with the concept of secure effective capacity. Considering the statistical delay QoS guarantees and aiming to maximize the secure effective capacity, an optimization problem is formulated with constraints on the total system power and on the loop interference of the full-duplex relay. Then, to derive the power allocation strategy, the original optimization problem is simplified by Taylor approximation and solved via the Lagrangian dual method and the Karush-Kuhn-Tucker conditions. The optimal solution is obtained by a sub-gradient iterative algorithm, and simulation results are presented. The results show that the proposed optimal power allocation strategy not only achieves the largest secure effective capacity but also satisfies the upper-layer delay QoS requirement.
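The Lagrangian-dual flavor of such a solution can be illustrated with the classic water-filling special case, maximizing a plain sum-rate objective over subcarriers instead of the paper's secure effective capacity (a stand-in objective under a total-power constraint):

```python
def water_filling(gains, p_total, tol=1e-9):
    """Maximise sum log(1 + g_i p_i) s.t. sum p_i = p_total, p_i >= 0.
    KKT conditions give p_i = max(0, mu - 1/g_i); the water level mu
    (the dual variable) is found by bisection."""
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu          # water level too high: budget exceeded
        else:
            lo = mu
    return [max(0.0, lo - 1.0 / g) for g in gains]
```

Stronger subcarriers receive more power, and the bisection on the dual variable plays the role of the sub-gradient iteration mentioned in the abstract.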
2017, 39(4): 866-872.
doi: 10.11999/JEIT160582
Abstract:
Joint optimization of cooperative spectrum detection and resource allocation based on the service profile is investigated to enhance the end-to-end transmission performance of secondary users by selecting the sensing nodes. First, the adaptive cooperation thresholds are adjusted according to the weight of the available detection index, based on the global detection metrics of the last round, and the optimal cooperative mode is selected to maximize the available sensing region. The idle channels are managed according to their stability and available bandwidth metrics for different secondary users. Then, the secondary users are divided into two categories based on their requested rates: delay-sensitive services and reliability-sensitive services. Idle channels are assigned to secondary users with different quality-of-service demands according to the service profile, enhancing end-to-end transmission performance. Simulation results show that the proposed algorithm can expand the available sensing region by adaptively adjusting the global detection metrics in Rayleigh fading channels, and can increase resource utilization by decreasing the outage of delay-sensitive services.
2017, 39(4): 873-880.
doi: 10.11999/JEIT160563
Abstract:
A low-complexity iterative majority-logic decoding algorithm is presented, in which only binary messages are involved in message passing, processing, and updating. Instead of computing extrinsic information, the presented algorithm computes a reliability measure based on the syndrome states (correct or erroneous) in the Tanner graph. Compared with several existing iterative majority-logic decoding algorithms, the presented algorithm does not require information scaling and hence avoids the corresponding real multiplication operations, leading to very low decoding complexity. Furthermore, a special quantization is combined with the presented algorithm, and an optimization method based on discrete Density Evolution (DE) is given. Simulation results show that, compared with the original algorithm, the presented algorithm achieves about 0.3~0.4 dB performance gain over the Additive White Gaussian Noise (AWGN) channel. Moreover, all decoding messages exchanged among the nodes are binary, which makes the presented algorithm convenient for hardware implementation.
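A hard-decision cousin of such syndrome-driven decoding is the classic bit-flipping decoder; the sketch below (using a small Hamming code, not the paper's algorithm or codes) shows how unsatisfied-check counts can serve as a binary reliability measure:

```python
def bit_flip_decode(H, r, max_iter=20):
    """Bit-flipping decoding: H is the parity-check matrix (list of rows),
    r the received hard-decision word. Flip the bits that participate in
    the most unsatisfied checks until the syndrome is zero."""
    n = len(r)
    c = list(r)
    for _ in range(max_iter):
        syndrome = [sum(H[j][i] * c[i] for i in range(n)) % 2
                    for j in range(len(H))]
        if not any(syndrome):
            return c                       # valid codeword reached
        # vote for each bit = number of unsatisfied checks it touches
        votes = [sum(syndrome[j] for j in range(len(H)) if H[j][i])
                 for i in range(n)]
        worst = max(votes)
        for i in range(n):
            if votes[i] == worst:          # majority-logic flip
                c[i] ^= 1
    return c
```

Only parity sums and integer comparisons are needed, which mirrors the abstract's point that binary messages avoid real multiplications.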
2017, 39(4): 881-886.
doi: 10.11999/JEIT160662
Abstract:
Heterogeneous signcryption schemes can ensure both confidentiality and authentication for data communication between different security domains. However, some existing heterogeneous signcryption schemes are only proven secure in the random oracle model. To address this problem, a signcryption scheme from Identity-Based Cryptography (IBC) to Public Key Infrastructure (PKI) is proposed. The proposed scheme provides confidentiality and unforgeability under the Computational Diffie-Hellman (CDH) and Decisional Bilinear Diffie-Hellman (DBDH) assumptions. Theoretical and experimental analysis shows that the proposed scheme is more efficient in both computational cost and communication overhead.
2017, 39(4): 887-892.
doi: 10.11999/JEIT160632
Abstract:
Changes in atmospheric turbulence affect microwave transmission. In order to study the impact of turbulence on microwave transmission in rail transit tunnels, this paper combines the motion characteristics of the piston wind with the calculation method of the atmospheric refractive index structure parameter. By investigating the influences of tunnel temperature, tunnel length, blockage ratio, and piston wind speed on the atmospheric refractive index structure parameter, a model of this parameter is established for the single-shaft rail transit tunnel environment. The distribution of the atmospheric refractive index structure constant in the tunnel environment is analyzed, and, based on a realistic tunnel temperature scenario, the change of the atmospheric turbulence refractive index structure parameter when a train passes through a single-shaft tunnel is compared with that in a tunnel without a shaft. The model provides a theoretical reference for the study of the radio refractive index structure constant in rail transit tunnel environments.
2017, 39(4): 893-900.
doi: 10.11999/JEIT160579
Abstract:
Based on the absolute-value and exponential monostable potentials, a generalized exponential-type single-well potential function is constructed. The laws for the resonant output of the monostable system governed by the potential parameters l and b and by the intensity D of Levy noise are explored under different values of the characteristic index α and symmetry parameter β of the Levy noise. The results show that stochastic resonance can be induced by adjusting the exponential-type parameters l and b under any α or β of Levy noise. The larger b (or l) is, the wider the parameter interval of l (or b) that can induce SR (Stochastic Resonance). The Exponential SR (ESR) system can solve the problem that the traditional system cannot achieve SR owing to improper parameter selection. The interval of the intensity D of Levy noise that induces good stochastic resonance does not change with α or β. Finally, the proposed exponential-type monostable system is applied to detect bearing fault signals and achieves better performance than the traditional bistable system.
2017, 39(4): 901-907.
doi: 10.11999/JEIT160575
Abstract:
Blind recognition of cyclic codes based on a check matrix matching algorithm is proposed to solve the blind identification problems of low fault tolerance and large intercepted data requirements. First, for every candidate code length n, the check matrices corresponding to the factors of x^n-1 are taken as candidate check matrices. Second, a matrix is filled with the intercepted bit stream received from a binary symmetric channel and multiplied with each candidate check matrix; whether a check matrix matching the code length and synchronization exists is then determined, from which the code length, synchronization, and generator polynomial can be estimated. Simulation results show that when the proposed method is applied to the (63, 51) cyclic code and a correct recognition probability of 80% is required for the code length, synchronization, and generator polynomial, the maximum tolerable bit error rates are 4.6×10^-2, 4.6×10^-2, and 1.6×10^-2, respectively.
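The matching statistic described above can be sketched on a toy (7,4) cyclic code; the generator g(x)=1+x+x^3 and the 5% channel error rate here are illustrative choices, not the paper's (63, 51) setting:

```python
import numpy as np

def poly_mul_gf2(a, b):
    # polynomial product over GF(2): convolution of coefficients mod 2
    return np.convolve(a, b) % 2

def check_matrix(h, n):
    # rows of H are shifts of the reversed check polynomial h(x) = (x^n - 1)/g(x)
    k = len(h) - 1
    H = np.zeros((n - k, n), dtype=int)
    for i in range(n - k):
        H[i, i:i + len(h)] = h[::-1]
    return H

def zero_syndrome_fraction(H, words):
    synd = words @ H.T % 2
    return float(np.mean(~synd.any(axis=1)))

rng = np.random.default_rng(0)
n = 7
g = np.array([1, 1, 0, 1])        # g(x) = 1 + x + x^3, a factor of x^7 - 1
h = np.array([1, 1, 1, 0, 1])     # h(x) = (x^7 - 1)/g(x)
k = n - (len(g) - 1)
msgs = rng.integers(0, 2, size=(200, k))
words = np.array([poly_mul_gf2(m, g) for m in msgs])    # 200 valid codewords
H = check_matrix(h, n)
print(zero_syndrome_fraction(H, words))                 # correct (n, g): every syndrome is zero
noisy = (words + (rng.random(words.shape) < 0.05)) % 2  # pass through a 5% BSC
print(zero_syndrome_fraction(H, noisy))                 # fraction drops; this is the decision statistic
```

The wrong candidate (n, g) yields a fraction near the chance level, so thresholding this fraction distinguishes the true code parameters even under moderate channel errors.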
2017, 39(4): 908-914.
doi: 10.11999/JEIT160578
Abstract:
To solve the problem of blind source separation for chaotic signals, an improved blind separation algorithm is proposed. A function constructed from a signal separation evaluation index adaptively updates the step size and momentum factor; the resulting variable step-size function is then substituted into the blind source separation algorithm, and an adaptive momentum term is introduced. Unlike most algorithms, which cannot estimate the mixing matrix, the proposed algorithm iteratively estimates the mixing matrix through the variable step-size function, from which the global matrix and the separation evaluation index are obtained; the step size and momentum factor are updated iteratively on this basis, and the separation matrix is finally obtained. Simulations show that the algorithm effectively adjusts the step size and momentum factor based on the estimated evaluation index. In both stationary and non-stationary environments, the algorithm achieves faster convergence and lower steady-state error when separating mixed chaotic signals. When colored noise is mixed in, the proposed algorithm also outperforms the traditional algorithm, showing that it has practical value for blind source separation of chaotic signals.
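A minimal sketch of the variable step-size idea, using a natural-gradient separation rule with a momentum term: the step shrinks as the update term grows, a crude stand-in for the paper's evaluation-index-driven schedule. The mixing matrix, logistic-map source, and step constants are all illustrative assumptions:

```python
import numpy as np

def amari_index(P):
    # standard cross-talk measure: 0 when P is a scaled permutation matrix
    P = np.abs(P)
    r = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1
    c = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (r.sum() + c.sum()) / (2 * P.shape[0])

rng = np.random.default_rng(1)
T = 20000
s1 = np.empty(T); s1[0] = 0.3
for t in range(1, T):                  # logistic map: a simple chaotic source
    s1[t] = 3.9 * s1[t - 1] * (1 - s1[t - 1])
s2 = np.sin(0.07 * np.arange(T))
S = np.vstack([s1 - s1.mean(), s2])    # zero-mean sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ S                              # observed mixtures

W = np.eye(2); dW = np.zeros((2, 2))
mu0, beta = 1e-3, 0.5
for t in range(T):
    y = W @ X[:, t:t + 1]
    G = np.eye(2) - (y ** 3) @ y.T     # natural-gradient term, f(y) = y^3
    mu = mu0 / (1 + 0.1 * np.abs(G).sum())  # toy variable step: damp large updates
    dW = beta * dW + mu * G @ W        # momentum-smoothed update
    W = W + dW

print(amari_index(W @ A), amari_index(A))  # separation quality vs. doing nothing
```

The global matrix W @ A approaching a scaled permutation indicates separation; the adaptive damping here is only one plausible realization of a step-size function.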
2017, 39(4): 915-921.
doi: 10.11999/JEIT160559
Abstract:
Semi-supervised learning algorithms based on the non-negative low-rank and sparse graph cannot describe the structure of the data exactly. Therefore, an improved algorithm that integrates smoothed low-rank representation and a weighted sparsity constraint is proposed. The low-rank term and the sparse term of the classical algorithm are improved respectively, so that the global subspace structure and the locally linear structure can be captured exactly. When building the objective function, the log-determinant function is used instead of the nuclear norm to smoothly approximate the rank function. Meanwhile, the shape interaction information and the label information of labeled samples are used to build the weighted sparsity constraint regularization term. The objective function is then solved by a linearized alternating direction method with adaptive penalty, and the graph is restructured by an effective post-processing method. Finally, a semi-supervised classification framework based on local and global consistency is used to finish the learning task. Experimental results on the ORL, Extended Yale B, and USPS databases show that the improved algorithm improves the accuracy of semi-supervised learning.
2017, 39(4): 922-929.
doi: 10.11999/JEIT161070
Abstract:
Human vision pays more attention to regions of interest than to other areas. A method based on salient region detection and the layered difference representation of 2D histograms is proposed to achieve visual enhancement. The algorithm first detects the salient region by saliency filtering and segments it with a threshold for visual perception. Then, a 2D histogram is calculated for the region of the original image corresponding to the salient region, and the statistical information in different layers is converted to layer 2 according to the inner relationship of the layers. Next, a difference vector is obtained by solving a constrained optimization problem of the layered difference representation at a specified layer. To preserve the character of the non-salient region, an original difference vector is defined. Finally, the output image is reconstructed by a transformation function derived from the two difference vectors for the salient and non-salient regions. Experimental results show that the proposed method efficiently enhances contrast and details in the salient region while protecting the non-salient region of the original image. The objective evaluation parameters in three groups of experiments illustrate that, compared with five other algorithms, the proposed algorithm obtains better scores in preserving the global mean lighting of the non-salient region and in increasing the PSNR and HSNR of the whole image, while the EME value of the enhanced images is moderate. The objective evaluation parameters are consistent with subjective observation, demonstrating that the proposed method achieves visual enhancement effectively.
2017, 39(4): 930-937.
doi: 10.11999/JEIT160543
Abstract:
Recurrent Neural Networks (RNN) are widely used for acoustic modeling in Automatic Speech Recognition (ASR). Although RNNs show many advantages over traditional acoustic modeling methods, their inherently higher computational cost limits their usage, especially in real-time applications. Noticing that the features used by RNNs usually have relatively long acoustic contexts, it is possible to lower the computational complexity of both the posterior calculation and the token passing process by exploiting the overlapped information. This paper introduces a novel decoder structure that regularly drops the overlapped acoustic frames, which leads to a significant computational cost reduction in the decoding process. In particular, the new approach can directly use the original RNNs with only minor modifications to the HMM topology, which makes it flexible. In experiments on conversational telephone speech datasets, this approach achieves a 2 to 4 times speedup with little relative accuracy reduction.
2017, 39(4): 938-944.
doi: 10.11999/JEIT160549
Abstract:
Phase-Coded Orthogonal Frequency Division Multiplexing (PC-OFDM) radar has drawn wide attention in high-resolution radar applications. This kind of radar transmits orthogonal sub-carriers phase-modulated by specific sequences and achieves high range and Doppler resolution at the same time. Considering its sensitivity to Doppler offset, this paper derives the pulse compression method for PC-OFDM radar and, based on the Cyclic Prefix (CP), proposes a Doppler offset estimation and compensation algorithm. Several simulations verify the effectiveness of the method in improving the High Resolution Range Profile (HRRP) under Doppler offset.
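The CP-based estimation idea can be sketched as follows: the cyclic prefix repeats the symbol tail N samples later, so a Doppler (carrier frequency) offset shows up as a fixed phase rotation between the two copies. The symbol size, CP length, offset value, and noise level below are illustrative, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
N, Lcp, eps = 64, 16, 0.21           # subcarriers, CP length, normalized Doppler offset

phases = rng.integers(0, 4, N)       # QPSK phase code on the subcarriers
sym = np.fft.ifft(np.exp(1j * np.pi / 2 * phases)) * np.sqrt(N)
tx = np.concatenate([sym[-Lcp:], sym])           # prepend the cyclic prefix

n0 = np.arange(N + Lcp)
rx = tx * np.exp(2j * np.pi * eps * n0 / N)      # Doppler offset = linear phase ramp
rx = rx + 0.05 * (rng.standard_normal(N + Lcp) + 1j * rng.standard_normal(N + Lcp))

# CP samples repeat N samples later, rotated by exactly 2*pi*eps:
corr = np.sum(rx[:Lcp] * np.conj(rx[N:N + Lcp]))
eps_hat = -np.angle(corr) / (2 * np.pi)
rx_comp = rx * np.exp(-2j * np.pi * eps_hat * n0 / N)  # compensation before pulse compression
print(eps_hat)
```

The estimate is unambiguous only for |eps| < 0.5 subcarrier spacings, a standard limitation of CP correlation estimators.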
2017, 39(4): 945-952.
doi: 10.11999/JEIT160576
Abstract:
To deal with the drawbacks of the traditional time-domain cancellation method, namely low reconstruction accuracy and poor clutter suppression performance under short Coherent Integration Time (CIT), a method of sea clutter reconstruction and suppression based on Compressed Sensing (CS) is proposed. First, the echo model of Over-The-Horizon Radar (OTHR) with short CIT is established, and the rationality of using CS to reconstruct sea clutter is analyzed. Second, the proposed method is elaborated in detail, covering the sparse representation of the echo model based on a redundant sinusoidal dictionary, the representation of the sensing matrix based on a dimensionality-reduced dictionary, and sea clutter reconstruction and suppression based on a modified Orthogonal Matching Pursuit (OMP) algorithm. Finally, computer simulation analysis and measured data verification are carried out. The results indicate that, under short CIT, the clutter suppression performance and engineering application value of the proposed method are better than those of the traditional time-domain cancellation method and subspace methods.
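For reference, the baseline OMP greedy recovery that the modified algorithm builds on can be sketched in a few lines; the random Gaussian sensing matrix and 3-sparse signal below are a generic demonstration, not the paper's sinusoidal clutter dictionary:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit all selected atoms by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)  # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]              # 3-sparse coefficient vector
y = Phi @ x_true
x_hat = omp(Phi, y, 3)
print(np.linalg.norm(x_hat - x_true))               # near-zero on this noiseless example
```

In the clutter application, the columns of Phi would instead be dictionary sinusoids, and the recovered sparse component is the reconstructed sea clutter to be subtracted from the echoes.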
2017, 39(4): 953-959.
doi: 10.11999/JEIT160597
Abstract:
Wideband MIMO radar exhibits great potential for high-resolution imaging, but it also suffers from electromagnetic signal congestion and interference problems, especially for radars working in the Very High Frequency (VHF) and Ultra High Frequency (UHF) bands. To solve this problem, a cyclic iterative method for designing orthogonal sparse frequency waveforms is proposed. First, the desired spectrum is used as an auxiliary variable, and a new objective function is constructed from both the mean square error between the spectrum of the transmitted waveform and the desired one and the integrated sidelobe levels. The optimization model is established under the constraints that the waveform has a constant envelope and that the spectrum magnitude lies between pre-established upper and lower bounds. Then, within the framework of the cyclic iterative algorithm, fast Fourier transform and spectral decomposition techniques are used to improve computational efficiency. Simulation results show that the proposed method performs well in designing orthogonal sparse frequency waveforms with low auto-correlation and cross-correlation sidelobes.
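The core cyclic idea of alternating between a spectral constraint and a constant-envelope constraint can be illustrated with a stripped-down alternating-projection loop. This is a toy single-waveform sketch with an assumed stopband, not the paper's full design with sidelobe terms and spectral bounds:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 256
stop = np.zeros(N, dtype=bool)
stop[60:90] = True                       # assumed stopband bins to keep clear

s = np.exp(2j * np.pi * rng.random(N))   # random constant-envelope starting waveform
for _ in range(200):
    S = np.fft.fft(s)
    S[stop] = 0.0                        # project onto the spectral mask
    s = np.fft.ifft(S)
    s = np.exp(1j * np.angle(s))         # project back to unit modulus (constant envelope)

S = np.fft.fft(s)
ratio = np.abs(S[stop]).max() / np.abs(S[~stop]).max()
print(ratio)                             # stopband peaks well below the passband peak
```

In the full method, an FFT-based step like this is applied to each waveform of the MIMO set, with additional terms coupling the waveforms to keep their cross-correlations low.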
2017, 39(4): 960-967.
doi: 10.11999/JEIT160595
Abstract:
For the cancellation of Doppler-spread clutter in airborne passive radar, a block RDLMS algorithm in the beam domain is first proposed to reduce the computational load. In the proposed algorithm, the order of cancellation in the Doppler dimension is reduced by beamforming, and the number of iterations of the adaptive processing is reduced by data segmentation, while the FFT can be employed. It is verified that the proposed algorithm reduces the computational load substantially, which is valuable for real-time cancellation. Second, an improved algorithm is developed based on the idea of proportionate adaptation. Because the same step size is assigned to all weights in the proposed algorithm, there is relatively more residual clutter when the Clutter-to-Noise Ratio (CNR) is high. The improved algorithm assigns different step sizes proportional to the logarithm of the corresponding weights. Simulations show that with the improved algorithm the residual is reduced by 1.3 dB and the clutter cancellation performance approaches the ideal case.
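The proportionate-adaptation idea (per-weight steps growing with the logarithm of the weight magnitudes) can be sketched against plain normalized LMS on a sparse system-identification toy problem; the filter length, sparse response, and step constants are illustrative assumptions, not the radar configuration:

```python
import numpy as np

rng = np.random.default_rng(4)
L, T = 64, 300
h = np.zeros(L)
h[[3, 10, 30]] = [0.8, -0.5, 0.3]            # sparse impulse response to identify
x = rng.standard_normal(T)
d = np.convolve(x, h)[:T] + 1e-3 * rng.standard_normal(T)

def run(proportionate, mu=0.5, eps=1e-4):
    w = np.zeros(L)
    for n in range(L, T):
        u = x[n:n - L:-1]                    # regressor, newest sample first
        e = d[n] - w @ u
        if proportionate:
            # per-tap gains grow with log(1 + |w|), MPNLMS-style
            g = np.maximum(np.log1p(1e3 * np.abs(w)), 1e-2)
            g = g / g.sum()
            w = w + mu * e * (g * u) / (u @ (g * u) + eps)
        else:
            w = w + mu * e * u / (u @ u + eps)
    return np.linalg.norm(w - h)             # remaining misalignment

mis_prop, mis_plain = run(True), run(False)
print(mis_prop, mis_plain)                   # proportionate converges faster on sparse systems
```

Concentrating the adaptation on the large weights is what lets the improved algorithm drive down the residual clutter faster than a uniform step at the same iteration budget.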
2017, 39(4): 968-972.
doi: 10.11999/JEIT160650
Abstract:
It is hard to select a probability distribution model for very high resolution SAR images. This paper presents a novel method for the automatic detection of cars in VHR SAR images without a probability distribution model. The proposed method starts by searching for bright regions and dark regions using gray-level features. Subsequently, fuzzy membership is employed to extract the semantic features of cars from the bright and dark regions. The potential scattering surfaces and shadows are then matched and evaluated using their spatial semantic relationship, and the cars are finally selected from the matching results. The efficiency of the proposed method is demonstrated by experiments, which show that it maintains a high detection rate without a probability distribution model.
2017, 39(4): 973-980.
doi: 10.11999/JEIT160633
Abstract:
To degrade the detection performance of Constant False Alarm Rate (CFAR) detection systems in SAR images, a new multiple-false-target method based on intermittent sampling and periodic repeater jamming is proposed. The new method overcomes the disadvantages of traditional intermittent sampling direct repeater jamming, namely few false targets and low jamming power utilization. The jamming performance of multiple false targets against Bi-Parameter CFAR (BP-CFAR) detection in SAR images is then analyzed in detail. The results show that multiple false targets can raise the detection threshold and weaken the detection performance of BP-CFAR. Finally, according to the characteristics of the CFAR detection window, an application model of multiple jammers is established. The model can produce 2-D netted multiple false targets, effectively guaranteeing enough false targets in the detection window. Theoretical analysis and computer simulation verify the validity and efficiency of the method.
2017, 39(4): 981-988.
doi: 10.11999/JEIT160604
Abstract:
The positioning method based on single-beacon ranging is a further development of underwater acoustic positioning technology. This paper studies single-beacon ranging localization along a straight path. On the one hand, the conventional direct-reduction method is not applicable to the straight path; on the other hand, when the beacon lies on the straight line or its extension, the linear iterative algorithm cannot locate the carrier, and when the coefficient matrix is nearly singular or ill-conditioned, the error of the solution increases markedly. An improved algorithm is proposed to solve these problems, which overcomes the influence of a singular or ill-conditioned coefficient matrix. Simulation results show that the localization accuracy of the proposed algorithm is similar to that of the Gauss-Newton method in most cases, that it can still compute a position when the beacon lies on the straight line or its extension, and that it obviously improves the positioning accuracy in situations where the linear iterative algorithm performs poorly. The effectiveness of the proposed algorithm is verified by sea trials.
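One standard way to keep the iterative solution stable when the coefficient matrix becomes nearly singular is Levenberg-Marquardt-style damping of the Gauss-Newton normal equations. The sketch below is a generic noiseless example with assumed beacon and track geometry, not the paper's improved algorithm:

```python
import numpy as np

# Single-beacon range-only fix: the carrier travels known displacements d
# (dead reckoning) and measures ranges r to a beacon b; solve for the start
# point p. The damping term lam*I keeps the normal equations solvable even
# when the geometry makes J^T J nearly singular (e.g., beacon on the track line).
b = np.array([0.0, 0.0])                                     # beacon position
d = np.array([[0, 0], [100, 0], [200, 0], [300, 0]], float)  # known displacements
p_true = np.array([50.0, 400.0])
r = np.linalg.norm(p_true + d - b, axis=1)                   # measured ranges (noiseless)

def solve(p0, lam=1e-2, iters=100):
    p = p0.copy()
    for _ in range(iters):
        diff = p + d - b
        rho = np.linalg.norm(diff, axis=1)
        res = rho - r
        J = diff / rho[:, None]                   # Jacobian of the ranges w.r.t. p
        A = J.T @ J + lam * np.eye(2)             # damped normal equations
        p = p - np.linalg.solve(A, J.T @ res)
    return p

p_hat = solve(np.array([0.0, 300.0]))             # coarse initial guess
print(np.linalg.norm(p_hat - p_true))
```

With zero residual at the true point, the damping introduces no bias at convergence; it only regularizes the intermediate steps where the matrix may be ill-conditioned.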
2017, 39(4): 989-996.
doi: 10.11999/JEIT160492
Abstract:
This paper proposes a performance analysis model for switched flight control systems driven by digital upsets when electronic devices are subject to electromagnetic environments. A Hidden Markov Model (HMM) is used to describe the characteristics of the digital upsets, and the model is constructed based on the theory of electromagnetic interference. Because the parameter estimation algorithms of the traditional HMM training method are sensitive to initial parameters, this paper proposes a fast initial parameter selection strategy that also accelerates the training process. Finally, HMM-based electromagnetic interferences are fed into the performance observation platform for distributed flight control systems. This paper also compares multiple performance degradation results under different electromagnetic fields from both theoretical and simulation perspectives. Simulation results demonstrate that the HMM can characterize digital electromagnetic upsets more accurately than existing digital electromagnetic models, and the simulated performance degradation results are consistent with the theoretical results.
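For context, the quantity that HMM training (e.g., Baum-Welch) maximizes is the observation likelihood computed by the forward algorithm. The two-state "calm/upset" parameters below are illustrative placeholders, not values estimated from electromagnetic data:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.3, 0.7]])    # state transitions (calm -> upset, etc.)
B = np.array([[0.95, 0.05], [0.2, 0.8]])  # emission probabilities for symbols {0, 1}
pi = np.array([0.8, 0.2])                 # initial state distribution

def forward_loglik(obs):
    # scaled forward recursion: returns log P(obs | A, B, pi) without underflow
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum()); alpha = alpha / alpha.sum()
    return loglik

obs = np.array([0, 0, 1, 1, 1, 0, 0])     # an example bit-upset observation sequence
print(forward_loglik(obs))
```

A fast initialization strategy matters because Baum-Welch only climbs this likelihood locally: a good starting (A, B, pi) both speeds up training and avoids poor local maxima.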
2017, 39(4): 997-1001.
doi: 10.11999/JEIT160553
Abstract:
To address the power control problem for Device-to-Device (D2D) communication in a cellular network and improve the cellular Energy Efficiency (EE), a weighted cellular EE problem is formulated and solved using a game-theoretic learning approach. Specifically, by substituting a lower bound for the original optimization objective, the power control problem is proved to be an exact potential game whose best Nash Equilibrium (NE) is the optimal solution of the lower bound. A low-complexity, fast-converging algorithm is then designed. Finally, numerical results verify the effectiveness of the proposed scheme.
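The equilibrium-seeking idea can be sketched with best-response dynamics, which converge to a pure-strategy NE in any exact potential game. The utility below (log-rate minus a power price) and all channel parameters are stand-in assumptions for illustration, not the paper's weighted EE objective.

```python
import math

LEVELS = [0.1, 0.5, 1.0]          # candidate transmit powers (assumed)
GAIN = [[1.0, 0.3], [0.2, 1.0]]   # channel gains GAIN[i][j] (assumed)
NOISE, PRICE = 0.1, 1.0           # noise power and power price (assumed)

def utility(i, p):
    """Illustrative utility: achievable log-rate minus a linear power cost."""
    interf = NOISE + sum(GAIN[i][j] * p[j] for j in range(len(p)) if j != i)
    return math.log2(1 + GAIN[i][i] * p[i] / interf) - PRICE * p[i]

def best_response_dynamics(p):
    """Let each player unilaterally switch to its best power level until
    no player can improve; the fixed point is a pure-strategy NE."""
    changed = True
    while changed:
        changed = False
        for i in range(len(p)):
            best = max(LEVELS, key=lambda q: utility(i, p[:i] + [q] + p[i + 1:]))
            if best != p[i]:
                p[i] = best
                changed = True
    return p
```

In an exact potential game every unilateral improvement strictly increases the potential function, so this loop cannot cycle and must terminate at an NE.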
2017, 39(4): 1002-1006.
doi: 10.11999/JEIT160593
Abstract:
In a cognitive D2D (Device-to-Device) full-duplex communication network, interference arises when D2D UsErs (DUEs) share the same uplink spectrum with Cellular UsErs (CUEs); to address this, a power allocation scheme is proposed to maximize the transmission rate. In this scheme, the cognitive D2D full-duplex communication model is first described, and the uplink interference and the corresponding transmission rates at the base station and the DUEs are analyzed. A power allocation algorithm is then proposed to maximize the DUE transmission rate in the cognitive radio system. Simulation results show that the proposed algorithm improves the spectrum efficiency and the overall transmission rate of the uplink in the cognitive D2D full-duplex communication network.
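The trade-off such a scheme navigates can be sketched as follows: a full-duplex D2D pair raises its powers to increase its sum rate, limited by residual self-interference and by an interference budget protecting the CUE uplink at the base station. All gains, noise, and budget values below are assumptions for illustration, not the paper's system parameters.

```python
import itertools
import math

NOISE, SELF_INT = 0.05, 0.1   # noise power and self-interference residue (assumed)
G12 = G21 = 0.8               # D2D link gains (assumed)
G_BS = 0.3                    # gain from the DUEs to the base station (assumed)
I_MAX = 0.6                   # interference budget protecting the CUE uplink

def fd_sum_rate(p1, p2):
    """Sum rate of a full-duplex D2D pair; each node's reception is degraded
    by noise plus a fraction of its own transmit power (imperfect SI cancellation)."""
    sinr1 = p2 * G21 / (NOISE + SELF_INT * p1)
    sinr2 = p1 * G12 / (NOISE + SELF_INT * p2)
    return math.log2(1 + sinr1) + math.log2(1 + sinr2)

def allocate(levels):
    """Grid-search the power pair maximizing the D2D sum rate subject to the
    aggregate interference at the base station staying within I_MAX."""
    feasible = [(p1, p2) for p1, p2 in itertools.product(levels, repeat=2)
                if (p1 + p2) * G_BS <= I_MAX]
    return max(feasible, key=lambda pp: fd_sum_rate(*pp))
```

With these numbers the budget is loose enough that both nodes transmit at full power; shrinking `I_MAX` forces the pair to back off in favor of the CUE.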
2017, 39(4): 1007-1011.
doi: 10.11999/JEIT160514
Abstract:
Existing address hopping methods require designing a new address-exchange protocol, their scalability is usually limited, and their hopping cycles are difficult to adapt automatically. This paper proposes an address hopping method based on an improved Dynamic Host Configuration Protocol (DHCP). The number of hopping addresses is calculated by fitting and predicting network traffic with an AutoRegressive Integrated Moving Average (ARIMA) model, and the hopping addresses are selected according to their vacant time. The address lease time is adjusted dynamically according to network anomalies, which are detected by a time-series similarity measure based on the dynamic time warping distance. Clients and the application server complete hopping communication through the address mapping relationships. The proposed method adjusts the hopping addresses and cycle dynamically without modifying the existing DHCP protocol, which not only makes it more difficult for attackers to intercept traffic and launch denial-of-service attacks, but also increases the attackers' overhead.
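The anomaly-detection step rests on the dynamic time warping distance, which scores the similarity of two traffic time series while tolerating shifts in timing. Below is the classic dynamic-programming formulation as a minimal sketch; how the paper thresholds this distance to flag an anomaly is not reproduced here.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between sequences a and b.
    d[i][j] is the best cumulative cost aligning a[:i] with b[:j]."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or diagonal match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike a pointwise Euclidean comparison, DTW reports zero distance for traffic curves that differ only by stretching along the time axis, which suits traffic whose shape recurs at varying speeds.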
2017, 39(4): 1012-1016.
doi: 10.11999/JEIT160611
Abstract:
To address the problems of cross-domain node sleep and the network congestion caused by load transfer, this paper proposes an energy-efficient policy based on a collaborative sleep mechanism between cross-domain nodes with load transfer in Wireless-Optical Broadband Access Networks (WOBAN). By analyzing the current load of the Optical Network Unit (ONU) and the collaboration between the ONU and the wireless router, maximum matching theory is applied to determine the sleep nodes and the destinations of load transfer, reducing energy consumption while ensuring network connectivity and low latency. Simulation results show that the proposed algorithm reduces the energy consumption of the entire network without significantly affecting packet delay.
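The matching step can be pictured as a bipartite problem: sleep-candidate nodes on one side, load-transfer destinations with spare capacity on the other, with an edge wherever a transfer is feasible. The sketch below uses Kuhn's augmenting-path algorithm for maximum bipartite matching; the graph construction and edge-feasibility rules are placeholders, not the paper's load model.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    adj[u] lists the right-side nodes that left node u may be matched to."""
    match_right = [-1] * n_right  # match_right[v] = left node matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be rerouted elsewhere.
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matched = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            matched += 1
    return matched, match_right
```

Each matched pair corresponds to one node that can sleep with its load safely absorbed, so maximizing the matching maximizes the number of simultaneously sleeping nodes.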