2023 Vol. 45, No. 7
2023, 45(7): 2293-2310.
doi: 10.11999/JEIT221219
Abstract:
In the past decades, the scope of the Internet of Things (IoT) has expanded continuously. With hundreds of billions of smart devices connecting to the IoT, huge challenges arise in several aspects, such as device cost, connectivity capability, and power supply. Fortunately, a new paradigm, passive IoT, is emerging as an effective solution to these challenges. In this paper, related concepts are analyzed and a definition of passive IoT is proposed. For the first time, the four challenges faced by passive IoT are studied: low energy density, low energy conversion efficiency, limited backscatter communication distance, and the difficulty of simultaneous transmission of power and information. The problems are analyzed in depth and the research progress is surveyed. For the challenge of low energy density, the research progress is reviewed from three aspects: beamforming, antenna design for energy harvesting, and intelligent reflecting surfaces. For the challenge of low energy conversion efficiency, it is reviewed from four aspects: receiver architecture optimization, waveform design, impedance matching, and rectifier optimization. For the challenge of limited backscatter communication distance, the research progress is reviewed from seven aspects: new modulation schemes, new frequency-shifted backscattering schemes, MIMO, new channel coding schemes, new signal detection methods, intelligent reflecting surface enhancement, and semi-active modes. Considering the difficulty of simultaneous transmission of power and information, the research progress is reviewed from two aspects: receiver architecture optimization and energy-information compatible signal coding schemes. For each aspect, the advantages and disadvantages of various methods are analyzed and future research directions are pointed out.
2023, 45(7): 2311-2316.
doi: 10.11999/JEIT221558
Abstract:
Recently, passive backscatter communication has attracted extensive attention as a key technology for the green Internet of Things. In a typical backscatter communication system, a Carrier Frequency Offset (CFO) may exist between the receiver and the transmitter due to relative motion, oscillator mismatch, or environmental changes. CFO has an important impact on signal detection and system performance, but most current studies ignore it. In this paper, a fast CFO detection method suitable for Frequency Shift Keying (FSK) modulation is designed, which can quickly and effectively detect, without pilots, whether a CFO exists and locate where it begins. Specifically, a detector based on the amplitude of the received signal is designed. Next, a fast detection algorithm based on the CUmulative SUM (CUSUM) algorithm is designed to locate the onset of the CFO. Finally, simulation results are provided to corroborate the proposed design and show that the detector can effectively locate the position where the CFO appears.
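The onset-location step can be pictured with a standard CUSUM change-point detector. The sketch below is a minimal, self-contained illustration rather than the paper's exact detector: the received amplitude is modeled as a constant-mean sequence whose mean shifts when a hypothetical CFO appears, and a two-sided tabular CUSUM flags the change. All signal parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative signal: the detector amplitude is roughly constant until
    # a (hypothetical) CFO appears at sample 600 and shifts the mean.
    n, change = 1000, 600
    x = rng.normal(1.0, 0.1, n)
    x[change:] += 0.3                 # amplitude shift caused by the CFO

    # Two-sided CUSUM for a mean shift (tabular form).
    mu0, k, h = 1.0, 0.15, 1.5        # reference mean, slack, threshold
    g_pos = g_neg = 0.0
    alarm = None
    for i, xi in enumerate(x):
        g_pos = max(0.0, g_pos + (xi - mu0) - k)
        g_neg = max(0.0, g_neg - (xi - mu0) - k)
        if g_pos > h or g_neg > h:
            alarm = i
            break

    print(f"true change at {change}, CUSUM alarm at {alarm}")

The slack k trades detection delay against false alarms: larger k ignores small fluctuations but reacts more slowly to a genuine offset.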
2023, 45(7): 2317-2324.
doi: 10.11999/JEIT221195
Abstract:
To model radio wave propagation in channels of Backscatter Communication (BackCom) systems that include an Intelligent Reflecting Surface (IRS), an efficient hybrid method based on the Parabolic Equation (PE) method and the Method of Moments (MoM) is proposed in this paper. The method treats the propagation modeling of IRS-assisted channels in electrically large scenarios from two aspects, radio wave propagation and electromagnetic scattering, which are solved numerically by the PE method and MoM, respectively. Simulations of IRS-assisted channels in line-of-sight and non-line-of-sight scenarios demonstrate the efficiency of the PE-MoM hybrid method. Simulation results show that the proposed algorithm is 6.46 times faster than MoM, while computational resource consumption is reduced by 81% and the relative root mean square error is kept at 3.89%. The comparison shows that the PE-MoM hybrid method can simulate propagation in IRS-assisted BackCom channels with a better tradeoff between computational accuracy and efficiency.
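As a rough illustration of the PE half of such a hybrid, the sketch below marches a field through free space with the standard split-step Fourier solution of the narrow-angle parabolic equation; the frequency, grid, and aperture are arbitrary assumptions, and the MoM scattering step of the paper is omitted entirely.

    import numpy as np

    # Split-step Fourier marching of the narrow-angle parabolic equation
    # in free space (no terrain, no refraction) -- the "PE" ingredient only.
    f = 3e9                          # 3 GHz carrier (assumed)
    k0 = 2 * np.pi * f / 3e8         # free-space wavenumber
    nz, dz = 1024, 0.05              # vertical grid
    dx = 1.0                         # range step (m)
    z = np.arange(nz) * dz
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)

    # Initial field: Gaussian aperture centered at height 10 m.
    u = np.exp(-((z - 10.0) ** 2) / (2 * 0.5 ** 2)).astype(complex)

    # One-way marching: diffraction is applied exactly in the spectral domain.
    propagator = np.exp(-1j * kz ** 2 * dx / (2 * k0))
    for _ in range(200):             # march 200 m in range
        u = np.fft.ifft(propagator * np.fft.fft(u))

    print("peak field magnitude after 200 m:", np.abs(u).max())

In the hybrid method, a step like this would carry the field to the IRS region, where a MoM solve handles the scattering before PE marching resumes.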
2023, 45(7): 2325-2333.
doi: 10.11999/JEIT221483
Abstract:
To improve spectrum efficiency and suppress the effect of channel uncertainty, a throughput-maximization algorithm is proposed for cognitive backscatter communication with imperfect channel state information. Firstly, considering the constraints of the maximum transmit power of the Primary Base Station (PBS), transmission time, user quality of service, and bounded channel uncertainty, a multivariable-coupled nonlinear robust throughput-maximization model is formulated by jointly optimizing the PBS beamforming vector, the reflection coefficient, and the transmission time. Then, the original problem is transformed into a convex optimization problem by using the worst-case approach, the S-Procedure, successive convex approximation, and alternating optimization, and an iteration-based robust resource allocation algorithm is proposed to solve it. Simulation results show that the proposed algorithm achieves better throughput and robustness than the non-robust algorithm, and the outage probability is reduced by 2.39%.
2023, 45(7): 2334-2341.
doi: 10.11999/JEIT221062
Abstract:
To alleviate the problem of user energy shortage in edge computing networks, a computing task offloading and resource allocation scheme for an Unmanned Aerial Vehicle (UAV)-assisted backscatter communication network is proposed. Firstly, a problem of minimizing the total energy consumption of the UAV is formulated by jointly designing the UAV trajectory, the computation frequencies of the users, the task offloading ratio, the transmit power of the UAV and users, and the time allocated to backscattering and to active communication. Then, using an alternating iteration method, the original non-convex problem is divided into two subproblems, which are solved by the successive convex approximation method. Numerical results demonstrate that the proposed algorithm effectively reduces the UAV energy consumption and has good convergence.
2023, 45(7): 2342-2349.
doi: 10.11999/JEIT221534
Abstract:
An ambient backscatter cellular network can support both cellular communication and ambient backscatter communication and has broad application prospects, but serious interference exists between ambient backscatter signals and cellular signals. To solve this problem, a Cascade Interference Alignment (CIA) algorithm for ambient backscatter cellular networks is proposed. A two-tier precoding matrix is designed to align the interference of base station signals at the reader and the users. Considering that the backscatter node has weak computing capacity and cannot design a precoding matrix independently, the backscatter signal is precoded by incorporating the channel state information from the base station to the backscattering device. A two-tier interference suppression matrix for the users and a three-tier interference suppression matrix for the reader are designed to eliminate interference from different sources. Simulation results show that the proposed algorithm can eliminate the complex interference in the ambient backscatter cellular network, ensure normal transmission of both the cellular and backscattered signals, and provide better sum-rate performance.
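The null-space idea underlying such precoding can be shown compactly. The sketch below is a generic zero-forcing example, not the paper's cascade alignment scheme: the base station picks its precoder inside the null space of the channel toward the reader, computed by SVD, so the reader sees no base-station interference. Dimensions and channels are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy setting: a 4-antenna base station serves a 2-antenna user while a
    # single-antenna backscatter reader must be protected from interference.
    H_user = (rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))) / np.sqrt(2)
    H_reader = (rng.normal(size=(1, 4)) + 1j * rng.normal(size=(1, 4))) / np.sqrt(2)

    # Null-space precoding: right singular vectors of H_reader beyond its
    # rank span the subspace that is invisible to the reader.
    _, s, Vh = np.linalg.svd(H_reader)
    null_basis = Vh.conj().T[:, H_reader.shape[0]:]     # 4x3 null-space basis

    # Precode the user's 2 streams inside that null space.
    W = null_basis @ rng.normal(size=(null_basis.shape[1], 2))
    W /= np.linalg.norm(W)

    print("leakage to reader:", np.linalg.norm(H_reader @ W))  # ~ 0
    print("useful gain to user:", np.linalg.norm(H_user @ W))

The paper's CIA algorithm layers several such projections (two tiers at the users, three at the reader) so that each receiver cancels interference from every source, not just the base station.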
2023, 45(7): 2350-2357.
doi: 10.11999/JEIT220778
Abstract:
The outage performance of the primary and secondary systems in a commensal symbiotic radio network with energy harvesting is investigated. First, based on the energy-causality constraint of the secondary user, the signal-to-noise ratios for decoding the primary and secondary signals are given, and the outage probabilities of the primary and secondary systems are defined. On this basis, closed-form expressions for the outage probabilities of the primary and secondary systems under the Rayleigh fading model are derived, and the diversity gains of the primary and secondary systems are then obtained. It is shown that the access of the secondary user can bring a beneficial diversity gain to the primary system, i.e., the diversity gain of the primary system is increased from 1 to 2. Finally, the correctness of the theoretical analysis is verified by simulations, and the effects of different system parameters on the outage probabilities of the primary and secondary systems are investigated.
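The meaning of a diversity-order jump from 1 to 2 can be checked numerically. The sketch below is a simplified Monte Carlo stand-in, not the paper's system model: under Rayleigh fading, a single link's outage probability falls by 10x per 10 dB (slope 1), while selecting the better of two independent links falls by 100x per 10 dB (slope 2). Thresholds and SNR values are assumed.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 10**6
    gamma_th = 1.0                        # outage SNR threshold (assumed)

    for snr_db in (5, 15, 25):
        snr = 10 ** (snr_db / 10)
        g1 = rng.exponential(size=N)      # Rayleigh fading -> exponential gain
        g2 = rng.exponential(size=N)
        p_single = np.mean(snr * g1 < gamma_th)               # slope 1
        p_two = np.mean(snr * np.maximum(g1, g2) < gamma_th)  # slope 2
        print(f"{snr_db:2d} dB: single-link outage {p_single:.2e}, "
              f"two-branch outage {p_two:.2e}")

Reading the printed values against SNR on a log-log scale reproduces the slopes that the closed-form analysis predicts.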
2023, 45(7): 2358-2365.
doi: 10.11999/JEIT221210
Abstract:
The combination of Unmanned Aerial Vehicle (UAV), Non-Orthogonal Multiple Access (NOMA), and Backscatter Communication (BC) can meet high capacity demands and improve communication quality in hotspots. A max-min rate optimization algorithm is proposed for UAV-assisted NOMA backscatter communication systems. Specifically, a resource allocation model is developed to maximize the system's minimum rate under constraints on the UAV transmit power, energy harvesting, reflection coefficients, transmission rates, and the Successive Interference Cancellation (SIC) decoding order. The original problem is divided into three subproblems, namely UAV transmit power optimization, reflection coefficient optimization, and joint optimization of the UAV position and SIC decoding order, which are handled by the block coordinate descent method. The UAV transmit power subproblem is solved by contradiction, and the remaining subproblems are solved by convex optimization with variable substitution and successive convex approximation. Finally, simulation results show that the proposed algorithm achieves a good tradeoff between the system sum rate and user fairness.
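For a feel of the max-min structure, the sketch below solves a two-user downlink NOMA toy model by bisection on the power split: the weak user's rate increases with its power share while the strong user (after SIC) loses rate, so the max-min optimum sits where the two rates cross. The model and numbers are assumptions, far simpler than the paper's full resource allocation.

    import numpy as np

    # Two-user downlink NOMA toy model: user 1 has the weaker channel and is
    # decoded first; user 2 applies SIC and sees no intra-cell interference.
    P, g1, g2 = 10.0, 0.2, 1.5        # total power and channel gains (assumed)

    def rates(a):
        """Rates for power split a to user 1 and (1 - a) to user 2."""
        r1 = np.log2(1 + a * P * g1 / ((1 - a) * P * g1 + 1))
        r2 = np.log2(1 + (1 - a) * P * g2)
        return r1, r2

    # r1 grows with a while r2 shrinks, so the max-min point is the crossing.
    lo, hi = 0.0, 1.0
    for _ in range(50):               # bisection on the power split
        a = (lo + hi) / 2
        r1, r2 = rates(a)
        if r1 < r2:
            lo = a                    # give user 1 more power
        else:
            hi = a

    r1, r2 = rates(a)
    print(f"power split a = {a:.4f}, rates ({r1:.3f}, {r2:.3f}) bit/s/Hz")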
Passive WiFi Internet of Things Backscatter Communication Based on Electromagnetic Energy Harvesting
2023, 45(7): 2366-2374.
doi: 10.11999/JEIT220951
Abstract:
To solve the problems of high power consumption, manual maintenance, and frequent battery replacement in traditional Internet of Things (IoT) communication devices, a passive WiFi IoT backscatter communication method based on electromagnetic wave energy harvesting is proposed. The device is implemented on a low-power microprocessor platform and uses the electromagnetic wave energy it harvests to achieve ultra-low-power WiFi backscatter communication, offering low power consumption, battery-free operation, small size, low production cost, and freedom from manual maintenance. It can be widely used in IoT applications.
2023, 45(7): 2375-2385.
doi: 10.11999/JEIT220613
Abstract:
A helmet antenna is an antenna that is conformal to an individual soldier's helmet through a unique man-made structure or material, and it is the core component of the individual-soldier wireless communication system. Most current helmet antenna designs focus on only one or two performance aspects, such as omnidirectional radiation, wide bandwidth, high gain, or low Specific Absorption Rate (SAR), which makes it difficult to meet rapidly changing tactical requirements. Recently, a variety of new antenna structures and impedance matching methods based on artificial magnetic conductors and metamaterials have been proposed, making it possible for helmet antennas to break through the mutual restrictions among gain, bandwidth, size, weight, and electromagnetic radiation. In this paper, the research progress of helmet antennas in radiation pattern control, bandwidth expansion, gain enhancement, and SAR reduction is first summarized. Future breakthroughs in helmet antennas are then anticipated, and a circular-array helmet antenna design based on non-Foster circuits is proposed.
2023, 45(7): 2386-2394.
doi: 10.11999/JEIT220707
Abstract:
A Brain-Computer Interface (BCI) establishes a direct communication pathway between the brain and external devices without relying on peripheral nerves and muscles. In recent years, great breakthroughs in recognition accuracy and interaction rate have been achieved by this technology. However, ElectroEncephaloGram (EEG) signals are strongly non-stationary and the user's subjective state fluctuates greatly. Traditional BCI technology lacks adaptability to the dynamic changes of brain activity, which affects the control stability of the BCI system and limits its development and application. An adaptive BCI can dynamically adjust the evoked paradigm and update the recognition model in real time according to the current state of the brain, thereby enhancing the adaptability of the brain-control system to non-stationary brain activity, improving its control accuracy and robustness, and enabling more practical brain-control systems, which is of great significance for the further development of BCI technology. This paper reviews and summarizes the related research on adaptive BCI and gives an outlook on its future development directions.
2023, 45(7): 2395-2405.
doi: 10.11999/JEIT220678
Abstract:
To reduce the Peak-to-Average Power Ratio (PAPR) and improve the security of the Orthogonal Time Frequency Space (OTFS) system, a low-PAPR secure transmission method based on U matrix transformation is proposed in this paper. In this method, an initial key is generated from the delay-Doppler (DD) domain of the wireless channel and used to generate chaotic sequences. The U matrix is designed from the chaotic sequence, so that the symbols after the U matrix transformation are completely scrambled and noise-like, and the U matrix selection can be controlled by an index. The transmitter sorts the OTFS time-domain signals obtained from different U matrix transformations and selects the one with the lowest PAPR for transmission. The legitimate receiver can correctly decrypt the signal after obtaining the index value, whereas an eavesdropper cannot decrypt the information even if it obtains the transmitted index value. Simulation results show that the proposed scheme can effectively reduce the PAPR of the OTFS system while ensuring reliability. In addition, the constellation diagram after the U matrix transformation becomes sphere-like chaos, hiding the modulation scheme and the information; the decryption difficulty for the eavesdropper is greatly increased and the security of the system is effectively enhanced.
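The candidate-selection step mirrors classic selected mapping. The sketch below is an SLM-style stand-in, using random phase sequences instead of the paper's chaotic U-matrix transforms: it generates several time-domain candidates for one OFDM-like symbol and keeps the one with the lowest PAPR. Symbol size and candidate count are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def papr_db(x):
        """Peak-to-average power ratio of a discrete-time signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    # One OFDM-like symbol: 256 random QPSK subcarriers.
    N = 256
    sym = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)

    # Candidate transforms: random phase rotations standing in for the
    # paper's chaotic U-matrix transformations.
    candidates = [np.fft.ifft(sym * np.exp(1j * 2 * np.pi * rng.random(N)))
                  for _ in range(8)]

    best = min(range(8), key=lambda i: papr_db(candidates[i]))
    print("original PAPR :", round(papr_db(np.fft.ifft(sym)), 2), "dB")
    print("best candidate:", best, "PAPR:",
          round(papr_db(candidates[best]), 2), "dB")

In the paper's scheme the "phase sequences" are key-derived U matrices, so the same selection that lowers PAPR also scrambles the constellation for anyone without the key.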
2023, 45(7): 2406-2414.
doi: 10.11999/JEIT220797
Abstract:
Collision avoidance in intelligent driving poses challenges such as extremely strict latency requirements and privacy protection. First, a Gated Recurrent Unit-Support Vector Machine (GRU_SVM) multi-level collision warning algorithm based on Semi-asynchronous Federated Learning with Adaptive Adjustment of Parameters (SFLAAP) is proposed. SFLAAP can dynamically adjust two training parameters according to training and resource conditions: the number of local training rounds and the number of local models participating in aggregation. Then, to solve the efficiency problem of collaboratively training the collision warning model under resource-constrained Mobile Edge Computing (MEC), a model minimizing the total training delay is established according to the relationship between the above parameters and the SFLAAP training delay, and it is transformed into a Markov Decision Process (MDP). Finally, within the established MDP, the Asynchronous Advantage Actor-Critic (A3C) algorithm is employed to adaptively determine the optimal training parameters, thereby reducing the training completion time of the collision warning model. Simulation results show that the proposed algorithm effectively reduces the total training delay while maintaining prediction accuracy.
2023, 45(7): 2415-2422.
doi: 10.11999/JEIT220721
Abstract:
To meet network requirements and improve system spectrum utilization, a Cognitive Radio Non-Orthogonal Multiple Access (CR-NOMA) scheme is investigated. To assess system reliability, NonLinear Power Amplification (NLPA), imperfect Successive Interference Cancellation (ipSIC), and imperfect Channel State Information (CSI) are taken into account. Analytical expressions for the system Outage Probability (OP) and throughput are derived, and the high-SNR expressions of the outage probability, its high-SNR approximation in the ideal case, and the diversity order are further analyzed. Simulation results show that the NLPA, ipSIC, and channel estimation error parameters have negative effects on the outage probability; that the outage probability decreases with increasing SNR until it converges to a fixed constant at high SNR; and that the outage probability also varies with the power allocation coefficient.
2023, 45(7): 2423-2431.
doi: 10.11999/JEIT220719
Abstract:
Considering the problem of data congestion in mobile networks caused by the rapid growth of smart applications in the Internet of Things (IoT), a cloud-fog hybrid computing model based on cluster collaboration is constructed. Cluster load balancing is considered while weighting factors are introduced to balance computation latency and energy consumption, and the minimum weighted sum of system latency and energy consumption is finally achieved. To solve this mixed-integer nonlinear programming problem, the original problem is decomposed, and resource allocation is optimized using the Karush-Kuhn-Tucker (KKT) conditions and a bisection search iterative method. An Overhead Minimization Offloading Algorithm based on Branch and Bound (BB-OMOA) is then proposed to obtain the optimal offloading decision. Simulation results show that the cluster-collaboration model significantly improves the system load balancing degree, and the proposed strategy significantly outperforms other benchmark schemes.
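The KKT-plus-bisection pattern is the same mechanism as classic water-filling, sketched below: the KKT conditions yield a threshold-form allocation, and the Lagrange multiplier is found by bisection on the power budget. Gains and budget are assumed; the paper's actual objective (a delay-energy weighted sum) is more involved, but the solution machinery is analogous.

    import numpy as np

    # Water-filling: KKT conditions of a convex rate maximization give
    # p_i = max(0, 1/lam - 1/g_i); bisection finds lam from the budget.
    g = np.array([2.0, 1.0, 0.5, 0.1])   # channel gains (assumed)
    P = 4.0                              # total power budget (assumed)

    def total_power(inv_lam):
        """Allocated power as a function of the water level 1/lam."""
        return np.maximum(0.0, inv_lam - 1.0 / g).sum()

    lo, hi = 0.0, P + (1.0 / g).max()    # bracket for the water level
    for _ in range(60):
        mid = (lo + hi) / 2
        if total_power(mid) < P:
            lo = mid                     # water level too low
        else:
            hi = mid

    p = np.maximum(0.0, mid - 1.0 / g)
    print("allocation:", p.round(3), " sum:", p.sum().round(3))

Because total allocated power is monotone in the water level, the bisection is guaranteed to converge, which is what makes this KKT reduction attractive inside an outer branch-and-bound loop.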
2023, 45(7): 2432-2442.
doi: 10.11999/JEIT220692
Abstract:
Considering the problem of how to balance the accuracy and timeliness of Controller Area Network (CAN) anomaly detection under limited in-vehicle resources, an adaptive optimization method for CAN anomaly detection is proposed. Firstly, quantitative indices of the accuracy and timeliness of CAN anomaly detection are established based on information entropy, and CAN anomaly detection is modeled as a multi-objective optimization problem. Then, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is designed to solve this multi-objective optimization problem. The Pareto front is used as the optimization and adjustment space for the parameters of the CAN anomaly detection model, and a robust control mechanism for the detection model is proposed to meet the needs of different scenarios. Experimental analysis examines in depth the influence of the optimization parameters on anomaly detection and verifies that the proposed method can adapt to diverse detection scenarios under limited in-vehicle resources.
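The core of NSGA-II is fast non-dominated sorting, which the sketch below implements on a toy set of (delay, error)-style objective pairs; the first front it returns is the Pareto set that would serve as the adjustment space. The objective values are made up for illustration.

    import numpy as np

    def non_dominated_sort(F):
        """Fast non-dominated sorting (the core of NSGA-II).
        F: (n, m) array of objective values, all minimized.
        Returns a list of fronts, each a list of row indices."""
        n = len(F)
        dominated_by = [[] for _ in range(n)]   # points each point dominates
        dom_count = np.zeros(n, dtype=int)      # how many dominate each point
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                    dominated_by[i].append(j)
                elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                    dom_count[i] += 1
        fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for i in current:
                for j in dominated_by[i]:
                    dom_count[j] -= 1
                    if dom_count[j] == 0:
                        nxt.append(j)
            current = nxt
        return fronts

    # Toy trade-off between detection delay and error rate (assumed numbers).
    F = np.array([[1.0, 0.9], [2.0, 0.5], [3.0, 0.2], [2.5, 0.6], [1.5, 1.0]])
    print(non_dominated_sort(F))   # first front = the Pareto-optimal points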
2023, 45(7): 2443-2450.
doi: 10.11999/JEIT220775
Abstract:
To address the severe transmission attenuation of TeraHertz (THz) links and the performance degradation that beam squint causes in traditional channel estimation schemes in wideband systems, a multi-user THz communication model assisted by Reconfigurable Intelligent Surfaces (RIS) is constructed in this paper, and a low-complexity two-stage cascaded channel estimation scheme is proposed. In the first stage, the channel estimation problem is transformed into an objective optimization problem by exploiting the sparsity of the THz channel and the log-sum function, and the objective function is optimized by gradient descent so that the channel parameters to be estimated iteratively approach the optimal solution, yielding the cascaded channel of a typical user. In the second stage, the cascaded channels of the other users are estimated with lower pilot overhead by exploiting their strong correlation with the typical user's channel. Simulation results show that the proposed scheme outperforms the comparison schemes.
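To make the log-sum sparsity surrogate concrete, the sketch below recovers a sparse vector from compressed measurements. In place of the paper's gradient descent, it uses the classic iteratively-reweighted least-squares view of the same log-sum penalty (each pass reweights by the current squared magnitudes, with the smoothing parameter annealed); the problem sizes and signal model are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Sparse ground truth and compressed measurements (assumed sizes).
    n, m, k = 64, 32, 4
    support = rng.choice(n, k, replace=False)
    x_true = np.zeros(n)
    x_true[support] = rng.choice([1.0, -1.0], k) * (1 + rng.random(k))
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x_true

    # Reweighted least squares for the log-sum penalty: each iteration
    # solves min x' D^{-1} x s.t. Ax = y with D = diag(x^2 + eps).
    eps = 1.0
    x = np.linalg.lstsq(A, y, rcond=None)[0]    # minimum-norm start
    for _ in range(30):
        D = np.diag(x ** 2 + eps)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        eps = max(eps * 0.7, 1e-4)              # anneal the smoothing

    print("true support :", np.sort(support))
    print("found support:", np.nonzero(np.abs(x) > 0.1)[0])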
A Hybrid Precoding Scheme for Millimeter Wave Massive MIMO System with Residual Hardware Impairments
2023, 45(7): 2451-2458.
doi: 10.11999/JEIT220724
Abstract:
Under the assumption of perfect transceiver hardware, hybrid precoding for millimeter wave massive Multiple-Input Multiple-Output (MIMO) systems has been extensively studied. However, the residual hardware impairments caused by non-ideal transceiver characteristics are unavoidable in millimeter wave massive MIMO systems and seriously degrade hybrid precoding performance. To address this problem, a hybrid precoding model that accounts for residual hardware impairments is built for millimeter wave massive MIMO systems, and a hybrid precoding scheme based on manifold optimization is proposed in this paper. Firstly, the optimization objective is designed from the modified mean square error. Then, closed-form expressions for the digital precoding and digital combining matrices are derived, and the optimal solutions of the analog precoding and combining matrices are obtained by handling the constant-modulus constraint on the Riemannian manifold. Finally, the joint design of hybrid precoding and combining is achieved by iteratively and alternately optimizing the hybrid precoding and combining matrices. Simulation results show that the proposed scheme effectively suppresses the impact of residual hardware impairments on the millimeter wave massive MIMO system and significantly improves system performance.
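Handling the constant-modulus constraint on the Riemannian manifold follows a standard project-and-retract recipe, sketched below on a toy least-squares objective: the Euclidean gradient is projected onto the tangent space of the unit-circle manifold, and each update is retracted back by entrywise normalization. Dimensions, step size, and objective are assumptions, not the paper's MSE formulation.

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy objective ||A v - b||^2 with the constant-modulus constraint
    # |v_i| = 1 on every analog-precoder entry.
    N, M = 16, 8
    A = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2 * N)
    b = rng.normal(size=M) + 1j * rng.normal(size=M)

    v = np.exp(1j * 2 * np.pi * rng.random(N))    # feasible start: |v_i| = 1
    print("initial objective:", round(np.linalg.norm(A @ v - b) ** 2, 4))

    step = 0.3
    for _ in range(500):
        egrad = A.conj().T @ (A @ v - b)          # Euclidean gradient
        # Project onto the tangent space of the circle manifold ...
        rgrad = egrad - np.real(egrad * v.conj()) * v
        v = v - step * rgrad
        v /= np.abs(v)                            # ... and retract back

    print("final objective  :", round(np.linalg.norm(A @ v - b) ** 2, 4))

The retraction keeps every iterate exactly feasible, which is why manifold methods handle the unit-modulus constraint more gracefully than penalty-based relaxations.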
2023, 45(7): 2459-2466.
doi: 10.11999/JEIT220729
Abstract:
Internet of Vehicles (IoV) edge computing can provide low-latency services for vehicle users by deploying computing resources at the edge of the network. In this paper, the delay and data backlog performance of IoV edge computing is analyzed by the Moment-Generating Function (MGF) method of Stochastic Network Calculus (SNC). Firstly, mathematical models are established for the arrival processes of high-priority and low-priority tasks, the single-hop millimeter-wave communication service process, and the edge computing service process. Secondly, according to the service concatenation theorem, the service process and the moment-generating function expressions for vehicle tasks of different priorities in the multi-hop network are obtained. Then, closed-form probability bounds on the delay and data backlog are derived for vehicle tasks of different priorities that are completely offloaded to the edge server over millimeter-wave multi-hop communication. Finally, the closed-form bounds are verified by Monte Carlo simulation.
2023, 45(7): 2467-2475.
doi: 10.11999/JEIT220786
Abstract:
An identification method of Line-of-Sight (LOS) propagation for indoor localization in the millimeter-wave band is proposed in this paper. Based on the beam training process, the proposed method identifies LOS clusters in the Power Angular Spectrum (PAS). After clustering the PAS, the statistical characteristics of five intra-cluster channel metrics, namely spatial-domain symmetry, the kurtosis of the impulse response, the kurtosis of the transfer function, mean excess delay, and Root Mean Square (RMS) delay spread, are analyzed using the maximum likelihood ratio and an artificial neural network. A noticeable difference between LOS and Non-Line-of-Sight (NLOS) clusters is observed and validated with measurements.
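Two of these metrics are easy to reproduce. The sketch below computes the kurtosis of the amplitude profile and the RMS delay spread for a synthetic LOS response (one dominant tap) and a synthetic NLOS response, showing the expected separation: high kurtosis and small delay spread under LOS. The channel model and delay resolution are assumptions.

    import numpy as np

    def rms_delay_spread(h, dt):
        """RMS delay spread of an impulse response h sampled every dt."""
        p = np.abs(h) ** 2
        t = np.arange(len(h)) * dt
        tau_mean = np.sum(t * p) / np.sum(p)
        return np.sqrt(np.sum((t - tau_mean) ** 2 * p) / np.sum(p))

    def amplitude_kurtosis(h):
        """Kurtosis of |h|: a dominant direct path scores high."""
        a = np.abs(h)
        return np.mean((a - a.mean()) ** 4) / np.mean((a - a.mean()) ** 2) ** 2

    rng = np.random.default_rng(6)
    dt = 1e-9                                  # 1 ns resolution (assumed)
    nlos = ((rng.normal(size=64) + 1j * rng.normal(size=64))
            * np.exp(-np.arange(64) / 20))     # diffuse, decaying profile
    los = nlos.copy()
    los[2] += 20.0                             # strong direct path

    for name, h in (("LOS ", los), ("NLOS", nlos)):
        print(f"{name}: kurtosis = {amplitude_kurtosis(h):6.1f}, "
              f"RMS delay spread = {rms_delay_spread(h, dt) / 1e-9:5.1f} ns")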
2023, 45(7): 2476-2483.
doi: 10.11999/JEIT220857
Abstract:
Service content in Internet of Vehicles scenarios is massive and highly dynamic, so traditional caching mechanisms cannot adequately perceive the dynamic changes of content, and the contradiction between the huge number of access devices and the limited resources of edge caching devices degrades system latency performance. In view of these problems, a reinforcement learning-based joint content caching and power allocation algorithm is proposed. First, considering the joint optimization of content caching and power allocation, an optimization model is established to minimize the overall system delay. Second, the optimization problem is modeled as a Markov Decision Process (MDP), the selection of content caches and content providers is mapped to discrete action sets, and power allocation is mapped to continuous parameters attached to the discrete actions. Finally, this problem with a discrete-continuous hybrid action space is solved with the Parametric Deep Q-Network (P-DQN) algorithm. Simulation results show that, compared with the baseline algorithms, the proposed algorithm improves the local cache hit rate and reduces the system transmission delay.
2023, 45(7): 2484-2493.
doi: 10.11999/JEIT220798
Abstract:
Considering the problems of a predictable primary node, high communication complexity, and the lack of a punishment mechanism for malicious nodes in the Practical Byzantine Fault Tolerance (PBFT) algorithm, a consortium chain Byzantine fault-tolerant algorithm based on a Perfect Binary Tree communication topology (PBT-BFT) is proposed. In PBT-BFT, a reputation evaluation model is designed to evaluate the behavior of nodes. At the same time, a Reputation-based Verifiable Random Function (R-VRF) is proposed, which makes the probability of being drawn positively correlated with the reputation value and ensures the fairness and randomness of the lottery for nodes with different reputation values. Then, a perfect binary tree communication topology is designed to reduce the communication complexity to linear, and a rotating primary node scheme and a pipelining mechanism are proposed to improve consensus efficiency. Experimental results show that, compared with PBFT, the average throughput is increased by 121.6% and the average delay is reduced by 73.8%, so the algorithm is well suited to consortium chains with large numbers of network nodes.
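The linear-communication claim is easy to see by counting messages. The sketch below compares one all-to-all PBFT-style exchange, which costs n(n-1) messages, with dissemination over a perfect binary tree, which costs n-1 messages in a logarithmic number of rounds; protocol details (votes, view changes, reply aggregation) are abstracted away.

    import math

    def pbft_messages(n):
        """One all-to-all phase: every node talks to every other node."""
        return n * (n - 1)

    def tree_messages(n):
        """Tree dissemination: each node hears from exactly one parent."""
        return n - 1

    def tree_rounds(n):
        """Depth of a perfect binary tree with n = 2^(d+1) - 1 nodes."""
        return math.ceil(math.log2(n + 1)) - 1

    for n in (7, 31, 127, 1023):
        print(f"n = {n:4d}: all-to-all {pbft_messages(n):8d} msgs, "
              f"tree {tree_messages(n):4d} msgs in {tree_rounds(n)} rounds")

The quadratic-versus-linear gap in the printed counts is exactly the scaling improvement the abstract attributes to the tree topology.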
2023, 45(7): 2494-2501.
doi: 10.11999/JEIT220792
Abstract:
To address the problems that closed-form algorithms cannot reach the Cramér-Rao Lower Bound (CRLB) and that Newton-type iterative algorithms require careful initial value selection in underwater three-dimensional positioning based on spatial angle information, a highly robust algorithm based on iterative least squares is proposed to correct the residual term of the closed-form solution and to select the initial value of the iterative algorithm. The pseudo-linear Weighted Least Squares (WLS) algorithm is used to obtain a closed-form solution, which serves as the initial value of a regularization-modified iterative method, and the iterative result is in turn used to modify the residual term of the closed-form algorithm. Through the alternation of these two steps, a stable and accurate solution is obtained. Simulations verify the high robustness of the iterative least squares algorithm: the adverse effect of the residual term in the pseudo-linear WLS algorithm is eliminated, the initial value selection problem of the iterative method is solved, and positioning performance similar to that of the converged iterative method is achieved.
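The closed-form-then-iterate pattern can be illustrated with a bearing-only toy problem. The sketch below forms the pseudo-linear least-squares equations from azimuth angles, takes their solution as the initial value, and then alternates with range-based reweighting, a simplified 2D analogue of the paper's 3D scheme; the geometry and noise level are assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    # Bearing-only (AOA) localization: each sensor at s observes an azimuth
    # theta toward the target p, giving the pseudo-linear constraint
    # sin(theta) * x - cos(theta) * y = sin(theta) * s_x - cos(theta) * s_y.
    p_true = np.array([40.0, 25.0])
    sensors = np.array([[0, 0], [100, 0], [0, 80], [100, 80]], float)
    d = p_true - sensors
    theta = np.arctan2(d[:, 1], d[:, 0]) + rng.normal(0, 0.01, len(sensors))

    A = np.column_stack([np.sin(theta), -np.cos(theta)])
    b = np.sin(theta) * sensors[:, 0] - np.cos(theta) * sensors[:, 1]

    p = np.linalg.lstsq(A, b, rcond=None)[0]   # closed-form initial value
    for _ in range(5):                         # reweight by estimated range
        w = 1.0 / np.linalg.norm(p - sensors, axis=1)
        p = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]

    print("estimate:", p.round(2), " true:", p_true)

The reweighting step matters because the pseudo-linear equation noise grows with the sensor-to-target range, which is precisely the residual-term bias the paper's alternation is designed to remove.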
2023, 45(7): 2502-2510.
doi: 10.11999/JEIT220757
Abstract:
The imaging method of low-Earth-orbit bistatic SAR based on Frequency Modulated Continuous Wave (FMCW) signals is studied in this paper. The spaceborne bistatic model features transmitter-receiver separation and a flexible geometry, but the nonlinear motion trajectories and bistatic slant range history complicate the derivation and analysis of the signal spectrum. The signal is therefore constructed with a fourth-order polynomial slant range model, and the two-dimensional spectrum of the signal is obtained by the method of series reversion. The spatial variation of the high-order polynomial coefficients is analyzed in detail, the range migration term is compensated in the frequency domain, and the azimuth phase is processed by the Singular Value Decomposition (SVD) method, dividing the azimuth spectrum into Doppler focusing terms and azimuth-variant terms. A nonlinear azimuth scaling function is introduced so that the azimuth variation can be completely corrected by two consecutive interpolations and resampling. The validity of the proposed method is verified by simulation experiments.
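The adequacy of a fourth-order slant-range model can be probed numerically. The sketch below builds a toy bistatic geometry with accelerating platforms (standing in for the nonlinear trajectories), fits the bistatic range history with 2nd- and 4th-order polynomials, and compares the residuals; all orbit numbers are invented for illustration and are not a real LEO geometry.

    import numpy as np

    t = np.linspace(-1.0, 1.0, 401)            # slow time (s)
    target = np.array([0.0, 0.0, 0.0])

    def platform(p0, v, a):
        """Position history with constant acceleration (nonlinear track)."""
        return p0[None] + t[:, None] * v[None] + 0.5 * t[:, None] ** 2 * a[None]

    tx = platform(np.array([-3e3, 6.0e5, 5.0e5]),
                  np.array([7.1e3, 0.0, 60.0]), np.array([0.0, -8.0, 1.5]))
    rx = platform(np.array([4e3, 5.8e5, 5.2e5]),
                  np.array([7.3e3, 40.0, 0.0]), np.array([5.0, 0.0, -6.0]))

    # Bistatic slant range history: transmitter leg plus receiver leg.
    R = (np.linalg.norm(tx - target, axis=1)
         + np.linalg.norm(rx - target, axis=1))

    for deg in (2, 4):
        resid = R - np.polyval(np.polyfit(t, R, deg), t)
        print(f"order {deg}: max residual {np.abs(resid).max():.3e} m")

The drop in residual from order 2 to order 4 is the motivation for carrying the polynomial to fourth order before applying series reversion to the phase.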
2023, 45(7): 2511-2518.
doi: 10.11999/JEIT220776
Abstract:
Multi-band Polarimetric Synthetic Aperture Radar (multiband-PolSAR) can obtain multiple observations of ground objects in the two dimensions of frequency and polarization, with potential applications in ground information extraction. However, the increased data dimension makes data processing and application more complex: compared with single-band single-polarization SAR data, multiband-PolSAR data additionally require registration and fusion. This study aims at automatic building outline extraction from data acquired by an airborne multiband-PolSAR system led by the Chinese Academy of Sciences and supported by the National High-resolution Observation System Major Project. It uses the SAR-SIFT method for registration, proposes a fusion method based on scattering mechanisms to improve the automatic unsupervised building outline extraction method of Ferro et al. (doi: 10.1109/TGRS.2012.2205156), and thereby forms a new automatic building outline extraction method for multiband-PolSAR images. Experimental results show that fusing multi-band and multi-polarization information improves the contrast of the feature image, the continuity of adjacent pixels, and the accuracy of building extraction. This paper builds a bridge between multiband-PolSAR technology and building extraction applications, and creates conditions for research on 3D building structure reconstruction using multiband-PolSAR data.
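For orientation, the registration stage follows the usual feature-matching pipeline. The sketch below is a hedged stand-in: OpenCV ships no SAR-SIFT implementation, so ORB is used here as a placeholder detector and descriptor, while the matching and RANSAC steps mirror what a SAR-SIFT-based registration would do on two co-sized grayscale images.

# Hedged sketch of a feature-based registration pipeline in the spirit of
# SAR-SIFT; ORB stands in for the SAR-SIFT detector/descriptor.
import cv2
import numpy as np

def register(img_ref, img_mov):
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)

    # Brute-force Hamming matching with cross-check for robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC homography rejects outlier matches (speckle is a major issue in SAR).
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img_ref.shape[:2]
    return cv2.warpPerspective(img_mov, H, (w, h))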
2023, 45(7): 2519-2527.
doi: 10.11999/JEIT220794
Abstract:
Because the influence of Inter-Carrier Interference (ICI) on the signal in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) underwater acoustic communication systems is unknown, existing receivers suffer from either incomplete interference cancellation or high computational complexity. To solve this problem, an iterative MIMO-OFDM receiver based on ICI depth estimation is proposed. Pilot frequency-domain correlation is used to estimate the ICI depth for each transmitted signal. In channel estimation, the frequency-domain matrix of each channel is reconstructed using the estimated interference depth, which avoids selecting the same interference depth for different channels; the proposed receiver can therefore adapt to channel variations and reduce computational complexity. Furthermore, decision feedback equalization is introduced into the MIMO-OFDM underwater acoustic communication system, and the equalized symbols are used to eliminate ICI. Simulation results show that the correct decoding time of the proposed receiver is less than that of the ICI-progressive receiver.
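To make the depth-driven reconstruction concrete, here is a toy numpy sketch. Both ingredients are assumptions of mine, not the paper's exact rules: the depth estimator uses a simple pilot-correlation threshold, and the channel matrix is rebuilt as a banded matrix with 2D+1 diagonals once a depth D has been chosen.

# Toy sketch: once an ICI depth D is estimated per channel, the frequency-
# domain channel matrix is treated as banded with 2*D+1 diagonals, which
# caps the equalization cost. The depth-estimation rule here is illustrative.
import numpy as np

def estimate_ici_depth(rx_pilots, tx_pilots, max_depth=4, thresh=0.1):
    """Correlate received pilots against frequency-shifted copies of the
    transmitted pilots; keep the largest shift with strong correlation."""
    depth = 0
    for d in range(1, max_depth + 1):
        shifted = np.roll(tx_pilots, d)
        rho = np.abs(np.vdot(shifted, rx_pilots)) / (
            np.linalg.norm(shifted) * np.linalg.norm(rx_pilots))
        if rho > thresh:
            depth = d
    return depth

def banded_channel_matrix(diag_taps, n_sc, depth):
    """Build an n_sc x n_sc banded matrix from per-offset taps
    diag_taps[k], k = -depth..depth (scalar or length n_sc-|k| array)."""
    H = np.zeros((n_sc, n_sc), dtype=complex)
    for k in range(-depth, depth + 1):
        np.fill_diagonal(H[max(0, -k):, max(0, k):], diag_taps[k])
    return H

taps = {k: 0.1 ** abs(k) for k in range(-2, 3)}   # toy per-offset taps
H = banded_channel_matrix(taps, 8, 2)
print(np.count_nonzero(H))                        # 8 + 2*7 + 2*6 = 34 entries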
2023, 45(7): 2528-2536.
doi: 10.11999/JEIT220710
Abstract:
Pattern recognition technology has been widely used for target detection within sea clutter, and binary pattern recognition algorithms face the problem of category imbalance when dealing with it. The traditional remedy expands the target data set by adding artificially simulated target echoes; however, the detection result then depends on the accuracy of the simulated data, and the complexity of the algorithm increases. In this paper, a multi-feature intelligent detection method for small targets within sea clutter based on a multi-class classifier is proposed. Firstly, multi-dimensional features are extracted from sea clutter and target data to construct a high-dimensional feature space. Then, following the one-versus-one strategy of multi-class classification, the sea clutter feature space is divided into multiple subspaces, each comparable in size to the target feature space, to build multiple binary classifiers for joint decision-making. The binary classifier selected in this paper is an improved two-parameter K-Nearest Neighbor (K-NN) algorithm, which can effectively adjust the false alarm rate. Verified on the Ice MultiParameter Imaging X-band (IPIX) radar data set, the detection probability of the proposed method is 82.40% at an observation time of 1.024 s, an improvement of 2% over existing feature detectors of the same type.
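The false-alarm control can be pictured with a small sketch. The reading below is mine, not necessarily the paper's exact parameter pair: the two parameters are taken to be the neighbor count k and a distance threshold calibrated on clutter-only training features at the desired false alarm rate.

# Hedged sketch of a distance-based K-NN detector with a controllable false
# alarm rate: the threshold is the (1 - Pfa) quantile of the k-th
# nearest-neighbor distance measured on clutter-only training features.
import numpy as np

class KnnDetector:
    def __init__(self, k=5, pfa=0.01):
        self.k, self.pfa = k, pfa

    def fit(self, clutter_feats):
        """clutter_feats: (n, d) clutter-only feature vectors."""
        self.train = clutter_feats
        d = self._knn_dist(clutter_feats)
        self.thresh = np.quantile(d, 1.0 - self.pfa)
        return self

    def _knn_dist(self, x):
        # Euclidean distance of each row of x to its k-th NN in the train set.
        dist = np.linalg.norm(x[:, None, :] - self.train[None, :, :], axis=2)
        # Column k: for training rows this skips the zero self-distance; for
        # test rows it is the (k+1)-th neighbor, a minor sketch approximation.
        return np.sort(dist, axis=1)[:, self.k]

    def predict(self, feats):
        # 1 = target declared (far from the clutter manifold), 0 = clutter.
        return (self._knn_dist(feats) > self.thresh).astype(int)

rng = np.random.default_rng(2)
det = KnnDetector(k=5, pfa=0.05).fit(rng.standard_normal((200, 3)))
test = np.vstack([rng.standard_normal((5, 3)),
                  rng.standard_normal((5, 3)) + 4.0])   # 5 clutter, 5 "targets"
print(det.predict(test))                                 # mostly 0s then 1s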
2023, 45(7): 2537-2545.
doi: 10.11999/JEIT220735
Abstract:
The authenticated encryption algorithm MORUS is one of the finalists of the Competition on Authenticated Encryption: Security, Applicability, and Robustness (CAESAR). The ability to resist differential analysis is an important indicator of the security of an authenticated encryption algorithm. The differential property of the initialization of MORUS is studied in this paper. Firstly, a differential deduction rule is proposed to quickly obtain a differential characteristic with relatively high probability. Based on this, a better differential characteristic is found using Mixed-Integer Linear Programming (MILP). To improve the efficiency of solving the MILP model, a divide-and-conquer approach is presented: according to the weight and value of ΔIV, the MILP model is divided into many sub-models, and most sub-models are proved to be equivalent, which dramatically reduces the time needed to solve the model. The best differential characteristics are given for 1 to 6 state update functions in the initialization of MORUS. Finally, a differential-distinguishing attack on simplified versions of MORUS is presented. This paper improves the results of previous related work.
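The divide-and-conquer idea can be illustrated on a toy scale. The sketch below is emphatically not the paper's MILP model: it merely partitions candidate input differences by Hamming weight and collapses patterns equivalent under word rotation, with a stub standing in for the per-sub-model solve.

# Toy illustration of the divide-and-conquer partition: enumerate dIV
# patterns by weight, solve one representative per rotational equivalence
# class. WORD_BITS and solve_submodel() are placeholders for the demo.
from itertools import combinations

WORD_BITS = 8                      # toy size; MORUS uses much larger words

def rotations(bits):
    """All cyclic rotations of a difference pattern, as sets of positions."""
    return {frozenset((b + r) % WORD_BITS for b in bits) for r in range(WORD_BITS)}

def representatives(weight):
    """One representative per rotational equivalence class at a given weight."""
    seen, reps = set(), []
    for bits in combinations(range(WORD_BITS), weight):
        key = min(tuple(sorted(s)) for s in rotations(bits))
        if key not in seen:
            seen.add(key)
            reps.append(key)
    return reps

def solve_submodel(d_iv):
    """Stub standing in for an MILP solve restricted to this dIV pattern;
    returns a dummy 'probability weight' for the demo."""
    return sum(d_iv)

best = min((solve_submodel(r), r) for w in (1, 2, 3) for r in representatives(w))
print("best weight/class:", best)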
2023, 45(7): 2546-2553.
doi: 10.11999/JEIT220745
Abstract:
Secure aggregation is a key step in ensuring the security and privacy of local model aggregation in federated learning. However, existing methods have problems such as high computational overhead, poor fairness mechanisms, privacy disclosure, and inability to resist quantum attacks. Therefore, Tree-Aggregate, an efficient grouped secure aggregation method based on a binary tree, is proposed in this paper. Firstly, the binary-tree-based user group secure communication protocol reduces the computation cost from O(N·lg²lg N·lglglg N) to the O(lg N·lg N) magnitude and ensures fairness of the computation cost through a uniform allocation mechanism. Then, a random padding algorithm is proposed to solve the privacy leakage problem caused by a single user. Finally, the anti-quantum-attack capability of the Tree-Aggregate method is improved by incorporating a lattice-based key exchange protocol. Theoretical analysis shows that Tree-Aggregate changes the growth rate of the computation cost from linear to logarithmic, and experimental comparison shows that when the number of users N ≥ 300, the computation cost is reduced by nearly 15 times compared with existing methods.
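The mask-cancellation principle behind tree-structured aggregation fits in a few lines. The demo below is my own minimal construction, not the paper's protocol: sibling leaves share one pairwise mask with opposite signs, so every complete sibling pair contributes only its true sum to the aggregate.

# Minimal sketch of mask cancellation in tree-structured secure aggregation:
# sibling leaves under a binary tree share a pairwise random mask, so the sum
# reveals only the aggregate, never an individual update.
import secrets

Q = 2**31 - 1                      # arithmetic modulo a public prime

def masked_inputs(values):
    """values: list of per-user model updates (even count for the demo).
    Each sibling pair (2i, 2i+1) shares one random mask r_i."""
    out = list(values)
    for i in range(0, len(values), 2):
        r = secrets.randbelow(Q)
        out[i] = (out[i] + r) % Q          # left child adds the mask
        out[i + 1] = (out[i + 1] - r) % Q  # right child subtracts it
    return out

updates = [3, 14, 15, 92]
masked = masked_inputs(updates)
print("masked uploads:", masked)           # individually random-looking
print("aggregate     :", sum(masked) % Q)  # equals sum(updates) mod Q
assert sum(masked) % Q == sum(updates) % Q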
2023, 45(7): 2554-2560.
doi: 10.11999/JEIT220804
Abstract:
Encryption and authentication are generally adopted to ensure the security of traditional satellite Telemetry, Tracking and Command (TT&C). However, several security limitations remain, such as identity counterfeiting and deception. A satellite TT&C ground station identity recognition method based on radio frequency fingerprints is presented, and a lightweight convolutional neural network for satellite platforms is designed. Relevant features of the IQ signal are extracted by a convolution layer along the IQ direction, which converts the two-dimensional data to one dimension; the time-domain structural features of the signal are then extracted by multi-layer convolutions along the time-series direction. A max-pooling layer reduces the data dimension, ensuring that the original feature information in the IQ signal is fully exploited while the computation burden is reduced. Finally, the satellite TT&C ground station is identified by two fully connected layers. Simulation experiments show that the average accuracy of the proposed method over 21 transmitters is 93.8%, which is 39.8% higher than the traditional support vector machine method, 11.5% higher than the DLRF network model, and 29.8% higher than the Oracle network model. The results indicate that the proposed method is robust and computationally light, offering a theoretical reference and engineering application value for improving the security of the satellite TT&C link.
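The described pipeline maps naturally onto a small PyTorch module. The sketch below is hedged: layer widths, kernel sizes, and the pooling size are illustrative guesses, not the paper's configuration; only the overall order (IQ-direction 2-D convolution, time-direction 1-D convolutions, max pooling, two fully connected layers) follows the abstract.

# Hedged PyTorch sketch of the described architecture; sizes are guesses.
import torch
import torch.nn as nn

class RffNet(nn.Module):
    def __init__(self, n_classes=21):
        super().__init__()
        # Input: (batch, 1, 2, T); a kernel of height 2 spans the IQ pair.
        self.iq_conv = nn.Conv2d(1, 32, kernel_size=(2, 7), padding=(0, 3))
        self.temporal = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.pool = nn.AdaptiveMaxPool1d(8)      # cuts the sequence to 8 samples
        self.fc = nn.Sequential(
            nn.Linear(64 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                        # x: (batch, 1, 2, T)
        x = torch.relu(self.iq_conv(x)).squeeze(2)   # -> (batch, 32, T)
        x = self.pool(self.temporal(x))              # -> (batch, 64, 8)
        return self.fc(x.flatten(1))                 # -> (batch, n_classes)

logits = RffNet()(torch.randn(4, 1, 2, 1024))    # smoke test
print(logits.shape)                              # torch.Size([4, 21])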
2023, 45(7): 2561-2570.
doi: 10.11999/JEIT220706
Abstract:
In complex visual scenes, the performance of existing deep convolutional neural network based salient object detection methods still suffers from the loss of high-frequency visual information and of the global structure of the object, which can be attributed to the weakness of convolutional neural networks at learning from data in non-Euclidean space. To solve these problems, an end-to-end collaborative learning framework with multiple graph neural networks is proposed, realizing cooperative learning of salient edge features and salient region features. Within this framework, a dynamic message enhancement graph convolution operator is constructed, which captures global context structure information in non-Euclidean space by enhancing message transfer between different graph nodes and between different channels within the same graph node. Further, an attention-aware fusion module realizes the complementary fusion of salient edge information and salient region information, providing complementary clues for the two information mining processes. Finally, by explicitly encoding salient edge information to guide the feature learning of salient regions, salient regions in complex scenes can be located more accurately. Experiments on four open benchmark datasets show that the proposed method has strong robustness and generalization ability, making it superior to current mainstream deep convolutional neural network based salient object detection methods.
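For readers unfamiliar with graph convolutions, one message-passing step with a channel gate looks as follows. This is a generic stand-in, not the paper's operator, whose exact form the abstract does not specify: H' = relu(A_norm H W) scaled per channel by a gate g.

# Minimal numpy sketch of one graph-convolution message-passing step with a
# per-channel gate, standing in for the "dynamic message enhancement" idea.
import numpy as np

def gcn_step(A, H, W, gate):
    """A: (n, n) adjacency, H: (n, d_in) node features,
    W: (d_in, d_out) weights, gate: (d_out,) channel gate in [0, 1]."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    msg = A_norm @ H @ W                        # aggregate neighbor messages
    return np.maximum(msg, 0.0) * gate          # ReLU, then channel gating

n, d_in, d_out = 5, 4, 3
rng = np.random.default_rng(0)
A = (rng.random((n, n)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T                  # random undirected graph
H_next = gcn_step(A, rng.standard_normal((n, d_in)),
                  rng.standard_normal((d_in, d_out)),
                  np.array([1.0, 0.5, 0.0]))
print(H_next.shape)                             # (5, 3)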
2023, 45(7): 2571-2579.
doi: 10.11999/JEIT220753
Abstract:
Collaborative representation based classifiers and their variants exhibit superior recognition performance in the field of pattern recognition. However, their success relies heavily on a balanced class distribution, and a highly imbalanced one may seriously degrade their effectiveness. To make up for this defect, this paper introduces a regularization term induced by the complemented subspace into the collaborative representation framework, which makes the improved model more discriminative. Furthermore, to improve recognition accuracy for minority classes on imbalanced datasets, a class weight learning algorithm based on the nearest subspace is proposed according to the representation ability of each class of training samples. The algorithm adaptively obtains the weight of each class and can assign greater weights to minority classes, so that the final classification results are fairer to them. The proposed model has a closed-form solution, which makes it computationally efficient. Experimental results on authoritative public binary-class and multi-class imbalanced datasets show that the proposed method significantly outperforms other mainstream imbalanced classification algorithms.
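The closed form referred to above is the ridge-type solution of standard collaborative representation. The numpy sketch below shows that baseline plus a per-class residual weighting; the paper's complemented-subspace regularizer is not reproduced, and the uniform default weights are placeholders.

# Hedged numpy sketch of a plain collaborative representation classifier
# (CRC): closed-form ridge coding plus a weighted nearest-subspace decision.
import numpy as np

def crc_predict(X, labels, y, lam=0.1, class_w=None):
    """X: (d, n) training samples as columns, labels: (n,) class ids,
    y: (d,) test sample. Returns the predicted class id."""
    n = X.shape[1]
    # Closed-form coding: alpha = (X^T X + lam I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    classes = np.unique(labels)
    if class_w is None:
        class_w = {c: 1.0 for c in classes}   # real weights would favor minorities
    # Decide by the smallest weighted class-wise reconstruction residual.
    resid = {c: class_w[c] * np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
             for c in classes}
    return min(resid, key=resid.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 9))              # 9 training samples, 3 classes
labels = np.repeat([0, 1, 2], 3)
print(crc_predict(X, labels, X[:, 0]))        # a class-0 sample, likely -> 0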
2023, 45(7): 2580-2594.
doi: 10.11999/JEIT220700
Abstract:
Nowadays, artificial intelligence is in a big-data-driven era. Machine learning algorithms, with deep neural networks as the mainstream, have achieved great development and remarkable results. However, data-driven artificial intelligence still faces problems such as the cost of annotating data, a lack of interpretability, and weak robustness. Introducing knowledge such as prior hypotheses, logic rules, and physical equations into existing machine learning algorithms will build artificial intelligence approaches powered by both data and knowledge, which could promote innovations in computing paradigms. Four types of knowledge (logical knowledge, visual knowledge, physical-law knowledge, and causal knowledge) that can be used to guide artificial intelligence algorithm models are summarized in this paper, and typical approaches to combining this knowledge with data-driven models are discussed.
2023, 45(7): 2595-2604.
doi: 10.11999/JEIT220711
Abstract:
Expanding tactile perception ability is important for the future development of intelligent robots, as it determines the range of application scenarios available to them. Tactile data collected by tactile sensors are the basis of robotic work, but these data have complex spatio-temporal properties. Spiking neural networks have rich spatio-temporal dynamics and an event-driven nature: they can better process spatio-temporal information and can be deployed on artificial intelligence chips to bring higher energy efficiency to robots. To solve the problem of backpropagation failure caused by the discreteness of neuron spike activity during network training, and viewing the intelligent robot as a dynamic system, a spiking-activity approximation function is introduced to make gradient descent by backpropagation effective in the spiking neural network. The over-fitting problem caused by the small amount of tactile data is alleviated by regularization methods. Finally, the spiking neural network tactile object recognition algorithms SnnTd and SnnTdlc with regularization constraints are proposed. Compared with the classical methods TactileSGNet, Grid-based CNN, MLP, and GCN, SnnTd improves the tactile object recognition rate by 5.00% over the best method, TactileSGNet, on the EvTouch-Containers dataset, and SnnTdlc improves it by 3.16% over the best method, TactileSGNet, on the EvTouch-Objects dataset.
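The approximation trick is the widely used surrogate gradient. The PyTorch sketch below keeps the hard Heaviside threshold in the forward pass and substitutes a rectangular window in the backward pass; the paper's exact approximation function may differ, so treat the window as an assumption.

# Hedged PyTorch sketch of the surrogate-gradient trick for spiking neurons.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold=1.0, width=0.5):
        ctx.save_for_backward(v)
        ctx.threshold, ctx.width = threshold, width
        return (v >= threshold).float()          # non-differentiable Heaviside

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradient only near the threshold.
        window = (torch.abs(v - ctx.threshold) < ctx.width).float() / (2 * ctx.width)
        return grad_out * window, None, None

v = torch.randn(8, requires_grad=True)
SurrogateSpike.apply(v).sum().backward()
print(v.grad)                                    # nonzero only near the threshold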
2023, 45(7): 2605-2613.
doi: 10.11999/JEIT220738
Abstract:
In the study of sampling and reconstruction of hyperspectral images, global sampling and fixed-block sampling do not take into account the complex texture distribution of hyperspectral images, and using the same measurement matrix everywhere results in poor reconstruction quality. To solve this problem, an Adaptive Block Compressed Sensing method based on Image Entropy (ABCS-IE) is presented. In this method, two-dimensional image entropy is used as a measure of the texture detail of hyperspectral images, and the size of image blocks is adaptively changed according to the distribution of texture details. Then, specific sampling rates are assigned to different image blocks, a dedicated measurement matrix is designed to compress each block according to its assigned rate, and the sampled measurements are fed into the reconstruction algorithm. The experimental results show that when this method is applied to compressed sensing reconstruction algorithms for hyperspectral images, the visual quality of the reconstructed image is significantly improved and the highest Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) are obtained: at a sampling rate of 0.4, PSNR increases by 2~4 dB and SSIM by 0.27, while the Root Mean Square Error (RMSE) and the information entropy difference (ΔH) are reduced, indicating that the reconstructed image is closer to the original. Moreover, the running time is reduced by 1~1.5 s. This method therefore makes full use of the texture features of hyperspectral images, effectively improves reconstruction quality, and shortens reconstruction time.
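The entropy-driven allocation can be sketched in a few lines of numpy. Two simplifications are mine: the per-block texture measure below is the one-dimensional gray-level entropy (the paper's two-dimensional image entropy also involves the neighborhood mean), and rates are allocated proportionally to entropy, which is an assumed rule.

# Hedged numpy sketch: per-block histogram entropy as the texture measure,
# sampling rates allocated proportionally so the mean equals the target rate.
import numpy as np

def block_entropy(block, levels=256):
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()               # Shannon entropy in bits

def allocate_rates(img, bs=32, total_rate=0.4):
    h, w = img.shape
    ent = np.array([[block_entropy(img[i:i+bs, j:j+bs])
                     for j in range(0, w, bs)]
                    for i in range(0, h, bs)])
    rates = total_rate * ent * ent.size / ent.sum()  # mean rate == total_rate
    return np.clip(rates, 0.05, 0.95)            # keep every block sampled

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(allocate_rates(img).round(2))              # higher rate where entropy is high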
2023, 45(7): 2614-2622.
doi: 10.11999/JEIT220758
Abstract:
Because of the temporal and spatial complexity of dynamic gesture data, traditional machine learning algorithms struggle to extract accurate gesture features, while existing dynamic gesture recognition algorithms suffer from complex network designs, large numbers of parameters, and insufficient gesture feature extraction. To solve these problems, a multiscale spatiotemporal feature fusion network based on the Convolutional vision Transformer (CvT) is proposed. Firstly, the CvT network used in image classification is introduced into dynamic gesture classification: it extracts the spatial features of single gesture images and fuses shallow and deep features across different spatial scales. Secondly, a multi-time-scale aggregation module is designed to extract the spatio-temporal features of dynamic gestures; combined with the CvT network, it suppresses invalid features. Finally, to make up for the deficiency of the dropout layer in the CvT network, the R-Drop model is applied to the multiscale spatiotemporal feature fusion network. Experimental results show that the proposed method outperforms existing dynamic gesture recognition methods, reaching a recognition rate of 92.26% on the Jester dataset.
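For context, R-Drop regularizes dropout by tying together two stochastic forward passes of the same batch. The sketch below states the standard published form of the loss; the weight alpha and its placement in this particular network are assumptions.

# Hedged PyTorch sketch of the R-Drop loss: two dropout-perturbed passes,
# cross-entropy on both, plus a symmetric KL consistency term.
import torch
import torch.nn.functional as F

def r_drop_loss(model, x, target, alpha=0.5):
    logits1, logits2 = model(x), model(x)        # two stochastic passes
    ce = F.cross_entropy(logits1, target) + F.cross_entropy(logits2, target)
    p1, p2 = F.log_softmax(logits1, -1), F.log_softmax(logits2, -1)
    kl = (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
          + F.kl_div(p2, p1, log_target=True, reduction="batchmean")) / 2
    return ce + alpha * kl

model = torch.nn.Sequential(torch.nn.Dropout(0.3), torch.nn.Linear(8, 3))
x, y = torch.randn(4, 8), torch.randint(0, 3, (4,))
print(r_drop_loss(model, x, y))                  # smoke test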
2023, 45(7): 2623-2633.
doi: 10.11999/JEIT220655
Abstract:
The analysis of pathological images is of great significance for the diagnosis and prognosis of gastric cancer. However, clinical application still faces challenges such as low consistency of visual reading and large differences between multi-resolution images. To address these issues, a prognostic prediction method for gastric cancer based on ensemble deep learning of pathological images is proposed. First, the pathological images of each patient at different resolutions are preprocessed by slicing and filtering. Then, deep features of the slices at each resolution are extracted and fused using three deep learning methods, i.e., ResNet, MobileNetV3, and EfficientNetV2, to obtain patient-level single-resolution prediction results for each individual classifier. Finally, a double-level ensemble strategy fuses the predictions of the heterogeneous individual classifiers across resolutions to obtain the patient-level prognostic prediction. In the experiments, pathological images of 250 gastric cancer patients are collected, and the prediction of distant metastases is used as an example for verification. The experimental results show that the prediction accuracy of the proposed method on the test set is 89.10%, the sensitivity is 89.57%, the specificity is 88.61%, and the Matthews correlation coefficient is 78.19%. Compared with single-model predictions, the prediction performance is significantly improved, which can provide an important reference for the treatment and prognosis of gastric cancer patients.
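The double-level ensemble reduces to two nested averaging steps. The numpy sketch below is a hedged reading: slice probabilities are averaged to a patient-level score per resolution (level one), then models and resolutions are fused by weighted averaging (level two); the uniform weights are placeholders, not the paper's fusion rule.

# Hedged numpy sketch of a double-level ensemble for one patient.
import numpy as np

def double_level_ensemble(probs, model_w=None):
    """probs[model][resolution] -> (n_slices,) positive-class probabilities
    for one patient. Returns the fused patient-level score."""
    patient_scores = []
    for model, by_res in probs.items():
        res_scores = [np.mean(p) for p in by_res.values()]   # level 1
        patient_scores.append(np.mean(res_scores))
    w = model_w or [1.0 / len(patient_scores)] * len(patient_scores)
    return float(np.dot(w, patient_scores))                  # level 2

probs = {"resnet": {"5x": np.array([0.7, 0.8]), "20x": np.array([0.6])},
         "mobilenetv3": {"5x": np.array([0.55, 0.65]), "20x": np.array([0.5])}}
print(double_level_ensemble(probs))              # fused score in [0, 1]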
2023, 45(7): 2634-2641.
doi: 10.11999/JEIT220744
Abstract:
For the Event Detection Problem (EDP) in large-scale Wireless Sensor Networks (WSN), conventional methods generally rely on prior information, which hinders practical application. In this paper, a deep-learning-based algorithm, the Alternating Direction Method of Multipliers Network (ADMM-Net), is proposed for the EDP. Firstly, low-rank and sparse matrix decomposition is adopted to capture the spatio-temporal correlation of events. The EDP is then formulated as a constrained optimization problem and solved by the Alternating Direction Method of Multipliers (ADMM). However, the optimization algorithm suffers from slow convergence, and its performance relies heavily on careful selection of prior parameters. By adopting the concept of "unfolding" from the deep learning field, ADMM-Net is obtained by unfolding the ADMM algorithm into a network with a fixed number of layers, whose parameters can be trained via supervised learning. Compared with conventional methods, the proposed ADMM-Net requires no prior information while enjoying fast convergence. Simulation results on both synthetic and real datasets verify the effectiveness of the proposed ADMM-Net.
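Unfolding turns each iteration into a network layer with learnable parameters. The PyTorch sketch below is a simplified stand-in, not the paper's layer definition: each layer alternates singular-value thresholding for the low-rank part and soft thresholding for the sparse part of M ~ L + S, with per-layer thresholds exposed as trainable parameters; the dual update is omitted for brevity.

# Hedged PyTorch sketch of unfolding for low-rank + sparse decomposition.
import torch
import torch.nn as nn

def svt(X, tau):                                  # singular-value thresholding
    U, s, Vh = torch.linalg.svd(X, full_matrices=False)
    return U @ torch.diag(torch.relu(s - tau)) @ Vh

def soft(X, tau):                                 # elementwise soft threshold
    return torch.sign(X) * torch.relu(torch.abs(X) - tau)

class AdmmNet(nn.Module):
    def __init__(self, n_layers=10):
        super().__init__()
        self.tau_l = nn.Parameter(torch.full((n_layers,), 0.5))  # learnable
        self.tau_s = nn.Parameter(torch.full((n_layers,), 0.1))  # learnable

    def forward(self, M):
        L = torch.zeros_like(M)
        S = torch.zeros_like(M)
        for k in range(len(self.tau_l)):          # one "iteration" per layer
            L = svt(M - S, self.tau_l[k])
            S = soft(M - L, self.tau_s[k])
        return L, S

M = torch.randn(20, 30)
L, S = AdmmNet()(M)                               # smoke test
print(L.shape, S.shape)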
2023, 45(7): 2642-2649.
doi: 10.11999/JEIT220725
Abstract:
In underground coal mine environments, infrared cameras are mostly used to sense the temperature of the surroundings, and the resulting images suffer from little texture information, heavy noise, and blurring. Ucm-YOLOv5, a neural network for real-time detection of underground targets in coal mines, is proposed in this paper as an improvement on YOLOv5. Firstly, PP-LCNet is used as the backbone to enhance inference speed on the CPU side. Secondly, the Focus module is eliminated, and the shuffle_block module replaces the C3 module of YOLOv5, reducing computation while removing redundant operations. Finally, the anchors are optimized and H-swish is introduced as the activation function. The experimental results show that Ucm-YOLOv5 has 41% fewer parameters and an 86% smaller model than YOLOv5. The algorithm achieves higher detection accuracy in underground coal mines, while its detection speed on the CPU side reaches the real-time standard, meeting the requirements of target detection in underground coal mines.
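For reference, H-swish has a standard closed form from the MobileNetV3 line of work, reproduced below; only its adoption here follows the abstract.

# The standard H-swish activation: h_swish(x) = x * ReLU6(x + 3) / 6.
import torch
import torch.nn.functional as F

def h_swish(x: torch.Tensor) -> torch.Tensor:
    return x * F.relu6(x + 3.0) / 6.0

print(h_swish(torch.tensor([-4.0, 0.0, 4.0])))   # tensor([-0., 0., 4.])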
2023, 45(7): 2650-2658.
doi: 10.11999/JEIT220702
Abstract:
Existing spatio-temporal-information-based user matching schemes across social networks suffer from decoupled spatio-temporal information and difficult feature extraction, which reduce matching accuracy. A Deep Learning based User Matching method for Cross social Networks (DLUMCN) is proposed. Firstly, grid mapping on the spatio-temporal scale is carried out on user sign-in data to generate a set of sign-in matrices containing user characteristics, and the user sign-in map is formed after normalization. Secondly, convolution is used to generate high-dimensional spatio-temporal feature maps from the user sign-in map; weight transformation and feature fusion of the feature maps are carried out by depthwise separable convolution, and the feature vector is obtained by flattening the feature maps into one dimension. Finally, a fully connected feedforward network is used as the classifier and outputs the user matching score. Experimental results on two real social network datasets show that the proposed method improves matching accuracy and F1-score compared with existing related methods, demonstrating its effectiveness.
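The grid mapping step is essentially a spatio-temporal histogram. The numpy sketch below bins (time, latitude, longitude) check-in triples into a time-by-space grid and normalizes it; bin counts and coordinate ranges are illustrative assumptions.

# Hedged numpy sketch of the spatio-temporal grid mapping of sign-in data.
import numpy as np

def signin_map(ts, lat, lon, t_bins=24, s_bins=16, bounds=None):
    """ts in hours-of-day [0, 24); lat/lon arrays of equal length."""
    if bounds is None:                            # data-driven spatial extent
        bounds = [(0, 24), (lat.min(), lat.max()), (lon.min(), lon.max())]
    H, _ = np.histogramdd(np.stack([ts, lat, lon], axis=1),
                          bins=(t_bins, s_bins, s_bins), range=bounds)
    return H / max(H.max(), 1.0)                  # normalize to [0, 1]

rng = np.random.default_rng(1)
m = signin_map(rng.uniform(0, 24, 500),
               rng.uniform(39.8, 40.1, 500), rng.uniform(116.2, 116.6, 500))
print(m.shape)                                    # (24, 16, 16)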
2023, 45(7): 2659-2666.
doi: 10.11999/JEIT220733
Abstract:
Based on the device characteristics of the Hewlett-Packard (HP) memristor, the mathematical relationships of the HP memristor are analyzed: there is an incremental linear relationship between the internal state variable of the HP memristor and its memristance, and changes of the memristance under applied voltages can be superimposed. From this, the conclusion is drawn that circuits with HP memristors exhibit linear superposition. The validity and correctness of these conclusions are verified by PSpice circuit simulation, providing theoretical support for applying the superposition theorem to linear circuits containing HP memristors and linear components.
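The linearity claim is easy to see in the standard HP model, where the state w moves in proportion to the delivered charge, making the memristance M an affine function of q. The numpy sketch below integrates that textbook model (parameter values are illustrative, not from the paper) and checks that dM/dq stays constant away from the boundaries.

# Hedged sketch of the standard HP memristor model:
#   M(w) = R_on * w/D + R_off * (1 - w/D),  dw/dt = mu * (R_on / D) * i(t),
# so w, and hence M, changes linearly with the charge q = integral of i dt.
import numpy as np

R_on, R_off, D, mu = 100.0, 16e3, 10e-9, 1e-14   # ohm, ohm, m, m^2/(V*s)

def simulate(v, dt=1e-6, w0=0.5):
    """Integrate the HP model for a voltage waveform v; returns (q, M)."""
    w, q = w0 * D, 0.0
    qs, Ms = [], []
    for vk in v:
        M = R_on * w / D + R_off * (1 - w / D)
        i = vk / M
        w = np.clip(w + mu * (R_on / D) * i * dt, 0.0, D)
        q += i * dt
        qs.append(q); Ms.append(M)
    return np.array(qs), np.array(Ms)

t = np.linspace(0, 1e-3, 1000)
q, M = simulate(np.sin(2 * np.pi * 1e3 * t))
slope = np.polyfit(q, M, 1)[0]                   # constant: M is linear in q
print(f"dM/dq ~ {slope:.3e} ohm/C (constant up to boundary clipping)")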
2023, 45(7): 2667-2674.
doi: 10.11999/JEIT220709
Abstract:
The memristor is a very suitable electronic component for neural network synapses because of its adjustable resistance, memory property, and nanoscale size. To build a memristor model more consistent with the characteristics of real physical memristors, an improved model is proposed on the basis of existing ones, overcoming the problems of boundary locking, asymmetric adjustment rates under positive and negative voltages, and limited universality of the circuit structure. Then, combining the Pavlov associative memory experiment with Hopfield neural network theory, a character associative memory circuit is designed. The circuit consists mainly of an input signal module, a synaptic array module, an activation function module, and a feedback control module. It solves the flexibility problem of using resistors as synapses in traditional array modules and realizes self-association for third-order blurred character images. In addition, the circuit is similar to the convolutional computation modules used in deep learning and provides a theoretical basis for realizing memristor-based intelligent hardware.