Current Articles
2024, Volume 46, Issue 12
2024, 46(12): 1-4.
2024, 46(12): 4335-4353. doi: 10.11999/JEIT240574
Abstract:
With the rapid development of 6G technology and the evolution of the Industrial Internet of Things (IIoT), federated learning has gained significant attention in the industrial sector. This paper explores the development and application potential of federated learning in 6G-driven IIoT, analyzing 6G’s prospects and how its high speed, low latency, and reliability can support data privacy, resource optimization, and intelligent decision-making. First, existing related work is summarized, and the development requirements and vision for applying federated learning in 6G industrial IoT scenarios are outlined. On this basis, a new paradigm for industrial federated learning, featuring a hierarchical cross-domain architecture, is proposed to integrate 6G and digital twin technologies, enabling ubiquitous, flexible, and layered federated learning. This supports on-demand, reliable distributed intelligent services in typical IIoT scenarios, achieving the integration of Operational and Communication Information Technology (OCIT). Next, the potential research challenges that federated learning may face toward 6G industrial IoT (6G IIoT-FL) are analyzed and summarized, followed by potential solutions and recommendations. Finally, future directions worth attention in this field are highlighted, with the aim of providing insights for subsequent research.
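The layered aggregation in the hierarchical cross-domain architecture above can be sketched as a two-tier federated averaging loop. This is a generic illustration of hierarchical FedAvg, not the paper's implementation; the device/edge/cloud split and the data-volume weights are assumptions.

```python
import numpy as np

def fed_avg(models, weights):
    """Weighted average of model parameter vectors (FedAvg aggregation)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Tier 1: each edge domain aggregates the models of its local devices,
# weighted by the number of local samples (illustrative values).
edge_a = fed_avg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [10, 10])
edge_b = fed_avg([np.array([5.0, 6.0])], [20])

# Tier 2: the cloud aggregates the edge models into a global model.
global_model = fed_avg([edge_a, edge_b], [20, 20])
print(global_model)  # -> [3.5 4.5]
```

The same `fed_avg` routine serves both tiers; only the grouping of participants changes between layers.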
2024, 46(12): 4354-4362. doi: 10.11999/JEIT231366
Abstract:
To overcome the ineffectiveness of conventional optimal resource allocation algorithms under channel estimation errors, a robust resource allocation algorithm with imperfect Channel State Information (CSI) is proposed for Multiple-Input Single-Output (MISO) symbiotic radio systems. Considering constraints on the minimum throughput of users, the transmission time, the maximum transmit power of the base station, and the reflection coefficients of users, and assuming bounded channel uncertainties, a robust throughput-maximization resource allocation problem is formulated by jointly optimizing the transmission time, beamforming vectors, and reflection coefficients. The original problem is transformed into a convex problem by applying Lagrange dual theory, variable substitution, and alternating optimization. Simulation results verify that, compared with a non-robust resource allocation algorithm, the proposed algorithm improves throughput by 11.7% and reduces the outage probability by 5.31%.
2024, 46(12): 4363-4372. doi: 10.11999/JEIT240407
Abstract:
The application of Deep Reinforcement Learning (DRL) in intelligent driving decision-making is increasingly widespread, as it effectively enhances decision-making capabilities through continuous interaction with the environment. However, DRL faces challenges in practical applications due to low learning efficiency and poor data-sharing security. To address these issues, a Directed Acyclic Graph (DAG) blockchain-assisted deep reinforcement learning Intelligent Driving Strategy Optimization (D-IDSO) algorithm is proposed. First, a dual-layer secure data-sharing architecture based on a DAG blockchain is constructed to ensure the efficiency and security of model data sharing. Next, a DRL-based intelligent driving decision model is designed, incorporating a multi-objective reward function that optimizes decision-making by jointly considering safety, comfort, and efficiency. Additionally, an Improved Prioritized Experience Replay with Twin Delayed Deep Deterministic policy gradient (IPER-TD3) method is proposed to enhance training efficiency. Finally, braking and lane-changing scenarios are selected in the CARLA simulation platform to train Connected and Automated Vehicles (CAVs). Experimental results demonstrate that the proposed algorithm significantly improves model training efficiency in intelligent driving scenarios, while ensuring data security and enhancing the safety, comfort, and efficiency of intelligent driving.
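Prioritized experience replay, one ingredient of the IPER-TD3 method mentioned above, can be sketched generically: transitions are stored with priorities derived from their TD errors and sampled in proportion to those priorities. This is textbook proportional prioritization, not the paper's improved variant; the capacity and `alpha` values are illustrative.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: transition i is sampled with
    probability p_i^alpha / sum_j p_j^alpha."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:  # evict the oldest entry
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx

buf = PrioritizedReplay(capacity=100)
buf.add(("s", "a", 1.0, "s2"), td_error=2.0)
buf.add(("s", "a", 0.0, "s2"), td_error=0.1)
batch, _ = buf.sample(4)  # sampled mostly from the high-TD-error transition
```

In a full TD3 loop the returned indices would be used to update priorities after each gradient step, together with importance-sampling weights to correct the sampling bias.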
2024, 46(12): 4373-4382. doi: 10.11999/JEIT240377
Abstract:
Flying Ad-hoc NETworks (FANETs) are widely used in emergency rescue scenarios owing to their high mobility and self-organization. In emergency scenarios, massive user paging requests make it challenging to reconcile the surge in local traffic with limited spectrum resources, resulting in significant channel interference in FANETs. There is thus an urgent need to extend the high spectrum utilization of Partially Overlapping Channels (POCs) to emergency scenarios. However, the adjacent-channel characteristics of POCs lead to complex interference that is difficult to characterize. Therefore, partially overlapping channel allocation methods for FANETs are studied in this paper. By utilizing geometric prediction to reconstruct time-varying interference graphs and characterizing the POC interference model with an interference-free minimum channel spacing matrix, a Dynamic Channel Allocation algorithm for POCs based on Upper Confidence Bounds (UCB-DCA) is proposed. The algorithm solves for an approximately optimal channel allocation scheme through distributed decision-making. Simulation results demonstrate that it achieves a trade-off between network interference and the number of channel switches, with good convergence performance.
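The upper-confidence-bound selection at the heart of UCB-DCA can be illustrated with the standard UCB1 rule: each node picks the channel maximizing its empirical reward plus an exploration bonus that shrinks as the channel is tried more often. The reward model below (a fixed payoff per channel, standing in for one minus normalized interference) is an assumption for illustration, not the paper's model.

```python
import math

def ucb_select(counts, rewards, t, c=2.0):
    """UCB1: pick the channel maximizing mean reward + exploration bonus."""
    best, best_score = None, -float("inf")
    for ch in range(len(counts)):
        if counts[ch] == 0:
            return ch  # try every channel at least once
        score = rewards[ch] / counts[ch] + math.sqrt(c * math.log(t) / counts[ch])
        if score > best_score:
            best, best_score = ch, score
    return best

# Toy run: channel 1 yields reward 1.0, the others 0.2.
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 200):
    ch = ucb_select(counts, rewards, t)
    r = 1.0 if ch == 1 else 0.2
    counts[ch] += 1
    rewards[ch] += r
print(max(range(3), key=lambda c: counts[c]))  # -> 1
```

In the distributed setting each UAV runs this rule on its own interference observations, so no central allocator is required.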
2024, 46(12): 4383-4390. doi: 10.11999/JEIT240518
Abstract:
Reconfigurable Intelligent Surface (RIS) technology is considered one of the potential key technologies for 6G mobile communications, offering advantages such as low cost, low energy consumption, and easy deployment. By integrating RIS into marine wireless channels, the unpredictable wireless transmission environment can be converted into a manageable one. However, current channel models struggle to accurately depict the unique signal transmission mechanisms of RIS-enabled base-station-to-ship channels in marine communication scenarios, making it difficult to balance accuracy and complexity in channel characterization and theoretical modeling. Therefore, this paper develops a segmented channel modeling method for near-field RIS-enabled marine communications and then proposes a multi-domain joint parameterized statistical channel model for RIS-enabled marine communications. This approach addresses the technical bottleneck of existing RIS channel modeling methods, which struggle to balance accuracy and efficiency, ultimately facilitating the rapid development of the 6G mobile communication industry in China.
2024, 46(12): 4391-4398. doi: 10.11999/JEIT240427
Abstract:
Edge computing provides computing resources and caching services at the network edge, effectively reducing execution latency and energy consumption. However, due to user mobility and network randomness, caching services and user tasks frequently migrate between edge servers, increasing system costs. A migration computation model based on pre-caching is constructed, and the joint optimization problem of resource allocation, service caching, and migration decision-making is investigated. To address this mixed-integer nonlinear programming problem, the original problem is decomposed, and the resource allocation is optimized using the Karush-Kuhn-Tucker conditions and a bisection search iterative method. Additionally, a Joint optimization algorithm for Migration decision-making and Service caching based on a Greedy Strategy (JMSGS) is proposed to obtain the optimal migration and caching decisions. Simulation results show the effectiveness of the proposed algorithm in minimizing the weighted sum of system energy consumption and latency.
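The KKT-plus-bisection step described above follows a common pattern: the KKT conditions give the allocation a water-filling form in a dual multiplier, and bisection on that multiplier enforces the total-resource budget. A generic sketch under assumed channel gains and budget, not the paper's exact model:

```python
def water_filling(gains, p_total, iters=100):
    """Solve sum_i max(0, 1/l - 1/g_i) = p_total by bisection on the
    dual multiplier l; the inner expression is the KKT-optimal power."""
    lo, hi = 1e-9, 1e6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # Allocated power is decreasing in l: too much power => l too small.
        if sum(max(0.0, 1.0 / mid - 1.0 / g) for g in gains) > p_total:
            lo = mid
        else:
            hi = mid
    l = 0.5 * (lo + hi)
    return [max(0.0, 1.0 / l - 1.0 / g) for g in gains]

p = water_filling([2.0, 1.0, 0.5], p_total=3.0)
print(round(sum(p), 3))  # -> 3.0 (budget met; better channels get more power)
```

The monotonicity of the budget in the multiplier is what makes simple bisection sufficient here, regardless of how many users share the budget.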
2024, 46(12): 4399-4408. doi: 10.11999/JEIT240411
Abstract:
Constructing an air-ground integrated edge computing network with an Unmanned Aerial Vehicle (UAV) as a relay can effectively overcome the limitations of the ground environment, expand network coverage, and provide users with convenient computing services. In this paper, with the objective of maximizing the amount of completed tasks, the joint optimization problem of UAV deployment, user-server association, and bandwidth allocation is investigated for a UAV-assisted multi-user, multi-server edge computing network. The formulated joint optimization problem contains both continuous and discrete variables, which makes it hard to solve. To this end, a Block Coordinate Descent (BCD) based iterative algorithm is proposed, incorporating optimization tools such as differential evolution and particle swarm optimization. The original problem is decomposed into three sub-problems that can be solved independently, and the optimal solution of the original problem is approached by iterating among these three sub-problems. Simulation results show that the proposed algorithm greatly increases the amount of completed tasks and outperforms other benchmark algorithms.
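The BCD iteration described above can be shown in miniature: fix all variable blocks but one, optimize that block, and cycle until convergence. In the paper the per-block solvers are differential evolution and particle swarm optimization; in this toy sketch each block update is the closed-form minimizer of an assumed quadratic objective.

```python
def bcd(update_x, update_y, x, y, rounds=50):
    """Alternate exact minimization over two blocks of variables."""
    for _ in range(rounds):
        x = update_x(y)  # minimize over x with y fixed
        y = update_y(x)  # minimize over y with x fixed
    return x, y

# Minimize f(x, y) = (x - y)^2 + (y - 3)^2. With y fixed the minimizer is
# x = y; with x fixed, setting df/dy = 0 gives y = (x + 3) / 2.
x, y = bcd(lambda y: y, lambda x: (x + 3) / 2, x=0.0, y=0.0)
print(round(x, 3), round(y, 3))  # -> 3.0 3.0
```

Because each block update never increases the objective, the iterates converge monotonically here; with heuristic inner solvers, as in the paper, convergence is to an approximate rather than exact block-wise optimum.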
2024, 46(12): 4409-4421. doi: 10.11999/JEIT240505
Abstract:
At the present stage, the satellite subsystems in Space Information Networks (SINs) have their own independent systems and are separated from each other, which makes the network closed and fragmented, forming severe resource barriers and resulting in weak collaborative use of space resources and poor network expandability. Traditional architecture designs adopt a “completely subversive” approach to the current space networks, which greatly increases the difficulty of actual deployment. Therefore, based on the current status of satellite networks, a “step-by-step upgrading” approach is adopted to promote the evolution of the existing network architecture, and a dynamic and scalable architecture model for SINs is proposed from a mission-driven perspective, so as to realize efficient and dynamic sharing of space resources among subsystems and promote their dynamic and efficient aggregation as mission requirements change. Firstly, a phased network architecture model is proposed, aiming at compatibility with and upgrading of the existing network architecture. Then, the design of the core component, the coordinator, is introduced, including the network structure and working protocol, the superframe structure, and an efficient network resource allocation strategy, to realize efficient transmission of spatial data. Simulation results show that the proposed network architecture realizes efficient sharing of network resources and greatly improves their utilization.
2024, 46(12): 4422-4431. doi: 10.11999/JEIT240029
Abstract:
Aiming at the problem that current network topology deception methods only make decisions in the spatial dimension, without considering how to perform spatio-temporal multi-dimensional topology deception in cloud-native network environments, a multi-stage Flipit-game topology deception method with deep reinforcement learning is proposed to obfuscate reconnaissance attacks in cloud-native networks. Firstly, the topology deception defense-offense model in cloud-native complex network environments is analyzed. Then, by introducing a discount factor and transition probabilities, a multi-stage game-based network topology deception model based on Flipit is constructed. Furthermore, on the basis of analyzing the defense-offense strategies of the game model, a topology deception generation method based on deep reinforcement learning is developed to solve for the topology deception strategy of the multi-stage game model. Finally, experiments demonstrate that the proposed method can effectively model and analyze topology deception defense-offense scenarios in cloud-native networks and has significant advantages over other algorithms.
2024, 46(12): 4432-4440. doi: 10.11999/JEIT240398
Abstract:
Establishing accurate short-term forecasting models for electrical load is crucial for the stable operation and intelligent advancement of power systems. Traditional methods have not adequately addressed the issues of data volatility and model uncertainty. In this paper, a multi-dimensional adaptive short-term forecasting method for electrical load based on a Bayesian Autoformer network is proposed. Specifically, an adaptive feature selection method is designed to capture multi-dimensional features. By capturing multi-scale features and time-frequency localized information, the model is better able to handle high volatility and nonlinear features in load data. Subsequently, an adaptive probabilistic forecasting model based on the Bayesian Autoformer network is proposed. It captures relationships between significant subsequence features and the associated uncertainties in load time series, and dynamically updates the probabilistic prediction model and parameter distributions through Bayesian optimization. The proposed model is subjected to a series of experimental analyses (comparative, adaptive, and robustness analyses) on real load datasets at three different scales (GW, MW, and kW). The model exhibits superior adaptability and accuracy, with average improvements in Root Mean Square Error (RMSE), Pinball Loss, and Continuous Ranked Probability Score (CRPS) of 1.9%, 24.2%, and 4.5%, respectively.
2024, 46(12): 4441-4450. doi: 10.11999/JEIT240092
Abstract:
Synthetic Aperture Radar (SAR) is a microwave remote sensing imaging radar. In recent years, with advances in digital and radio-frequency electronic technology, jamming technology against SAR imaging has developed rapidly. Active jamming, such as deception jamming based on Digital Radio Frequency Memory (DRFM) technology, poses serious challenges to civil and military SAR imaging systems. For research on SAR anti-jamming imaging against deception jamming, orthogonal waveform diversity design and waveform optimization are first carried out for Orthogonal Frequency Division Multiplexing waveforms with Cyclic Prefixes (CP-OFDM), yielding a CP-OFDM wideband orthogonal waveform set with excellent autocorrelation peak sidelobe levels and cross-correlation peak levels. Then, sparse SAR imaging theory is introduced and combined with CP-OFDM; using a sparse reconstruction method, high-quality, high-precision imaging with anti-jamming capability is realized. Finally, simulations based on point targets, surface targets, and real data prove that the method can completely remove the false targets generated by deception jamming, suppress sidelobes, and achieve high-precision imaging.
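The cyclic-prefix construction underlying CP-OFDM can be sketched in its textbook form: take the IFFT of the subcarrier symbols and prepend the tail of the resulting time-domain symbol. The subcarrier count and prefix length below are arbitrary; the paper's waveform-set optimization is not reproduced here.

```python
import numpy as np

def cp_ofdm_symbol(subcarriers, cp_len):
    """Build one CP-OFDM time-domain symbol: IFFT, then prepend the last
    cp_len samples as a cyclic prefix."""
    time = np.fft.ifft(subcarriers)
    return np.concatenate([time[-cp_len:], time])

sym = cp_ofdm_symbol(np.ones(64, dtype=complex), cp_len=16)
assert len(sym) == 80                     # 64 samples + 16-sample prefix
assert np.allclose(sym[:16], sym[-16:])   # prefix repeats the symbol tail
```

The cyclic prefix turns the channel's linear convolution into a circular one within each symbol, which is what allows simple per-subcarrier equalization after the receiver's FFT.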
2024, 46(12): 4451-4458. doi: 10.11999/JEIT240289
Abstract:
In the process of trajectory deception against a networked radar using an Unmanned Aerial Vehicle (UAV) cluster, false target points are generated by delaying and forwarding intercepted radar signals. Errors such as radar station location errors, UAV jitter errors, and forwarding delay errors can all cause these false target points to deviate from their intended positions, thereby degrading the effectiveness of the deception. Considering known radar measurement positions, UAV preset positions, deception distances, and a specific Space Resolution Cell (SRC) of the networked radar, the boundary conditions for a UAV cluster to successfully deceive the networked radar are analyzed in this paper, and the impact patterns of these errors on deception effectiveness are summarized. Numerical simulation results show that when all three kinds of errors are present, the derived results can effectively evaluate the deception capability of the UAV cluster against the networked radar.
A SAR Image Aircraft Target Detection and Recognition Network with Target Region Feature Enhancement
2024, 46(12): 4459-4470. doi: 10.11999/JEIT240491
Abstract:
In Synthetic Aperture Radar (SAR) image aircraft target detection and recognition, the discrete characteristics of aircraft targets in images and the similarity between their structures can reduce detection and recognition accuracy. A SAR image aircraft target detection and recognition network with enhanced target-region features is proposed in this paper. The network consists of three parts: a Feature Protecting Cross Stage Partial Darknet (FP-CSPDarknet) for protecting aircraft features, a Feature Pyramid Network with Adaptive fusion (FPN-A) for adaptive feature fusion, and a Detection Head with target-region scattering feature extraction and enhancement (D-Head). FP-CSPDarknet effectively protects aircraft features in SAR images during feature extraction; FPN-A adopts multi-level adaptive feature fusion and refinement to enhance aircraft features; and D-Head enhances the identifiable features of aircraft before detection, improving detection and recognition accuracy. Experimental results on the SAR-ADRD dataset demonstrate the effectiveness of the proposed method, with an average accuracy improvement of 2.0% over the baseline network YOLOv5s.
2024,
46(12):
4471-4482.
doi: 10.11999/JEIT231278
Abstract:
In active electronically scanned millimeter-wave security imaging, the uniform array antenna suffers from uncontrolled cost and high complexity, making it difficult to apply widely in practice. To this end, a near-field focused sparse array design algorithm with high sparsity and low sidelobes is proposed in this paper, and an improved three-dimensional (3D) time-domain imaging algorithm is applied to achieve high-accuracy 3D reconstruction. Firstly, the near-field focusing sparse array antenna model is constructed by taking the near-field focusing position and peak sidelobe level as constraints, with the ℓp (0<p<1) norm of the weight vector regularization established as the objective function. Secondly, by introducing auxiliary variables and establishing equivalent substitution models between the sidelobe and focus position constraints and these auxiliary variables, the problem of solving for the array weight vector under the coupling of the objective function and complex constraints is reformulated; the model is simplified and solved through this equivalent substitution. Then, the array excitations and positions are optimized using a combination of complex-number differentiation and heuristic approximation methods. Finally, the Alternating Direction Method of Multipliers (ADMM) is employed to handle the focus position constraint, the peak sidelobe constraint, and the array excitation cooperatively, and sparse array 3D imaging is realized by the improved 3D time-domain imaging algorithm. Experimental results show that the proposed method obtains a lower sidelobe level with fewer array elements while satisfying the radiation characteristics of the array antenna and near-field focusing. On raw millimeter-wave data, the advantages of the sparse array 3D time-domain imaging algorithm are verified in terms of high accuracy and high efficiency.
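The peak sidelobe level constraint at the heart of the design above can be made concrete with a small sketch (illustrative only, not the paper's ADMM formulation): evaluate the far-field array factor of a weighted linear array and measure its Peak SideLobe Level (PSLL) in dB, the quantity the sparse design tries to keep low.

```python
import numpy as np

# Sketch (not the paper's ADMM design): evaluate the far-field array factor of
# a weighted linear array and measure its Peak SideLobe Level (PSLL) in dB.
# Element positions are in half-wavelength units; weights are the excitations.
def psll_db(positions, weights, n_angles=2001):
    u = np.linspace(-1.0, 1.0, n_angles)          # u = sin(theta)
    # Array factor AF(u) = sum_n w_n * exp(j*pi*x_n*u), spacing in half-wavelengths
    af = np.abs(weights @ np.exp(1j * np.pi * np.outer(positions, u)))
    af /= af.max()
    main = np.argmax(af)
    # walk out of the main lobe to its first nulls, then take the largest lobe left
    left = main
    while left > 0 and af[left - 1] < af[left]:
        left -= 1
    right = main
    while right < n_angles - 1 and af[right + 1] < af[right]:
        right += 1
    side = np.concatenate([af[:left], af[right + 1:]])
    return 20 * np.log10(side.max())

pos = np.arange(16.0)            # uniform 16-element array, half-wavelength spacing
w = np.ones(16)
print(psll_db(pos, w))           # uniform taper: roughly -13 dB
```

A sparse design in the paper's spirit would remove elements (drive weights to zero via the ℓp penalty) while constraining this PSLL figure and the near-field focus.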
2024,
46(12):
4483-4492.
doi: 10.11999/JEIT240370
Abstract:
To address the pulse dispersion issue in detecting frequency-shifted chirp signals with traditional Fractional Fourier Transform (FrFT), an adaptive FrFT detection method is proposed in this paper. Leveraging the structural model of short packets and the Neyman-Pearson detection model, an analytical method is derived to evaluate the false alarm probability and missed detection probability of signal frame detection using an evaluation function and a decision threshold. Incorporating the pulse characteristics of traditional FrFT for complete chirp signals, a correction scheme for the fractional Fourier integral operator is proposed, and the peak distribution function of the frequency-shifted chirp symbol is derived for the adaptive FrFT. Addressing the search time shift issue in the adaptive FrFT detection process, the peak size and distribution of the frequency-shifted chirp symbol are analyzed, and the superiority of the adaptive FrFT detection compared to traditional FrFT is demonstrated.
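The Neyman-Pearson style frame decision described above can be sketched in miniature (a plain matched-filter correlator, not the paper's adaptive FrFT): correlate the received record against a chirp template and compare the normalized peak with a decision threshold.

```python
import numpy as np

# Sketch of threshold detection of a chirp frame (illustrative; not the
# paper's adaptive FrFT): correlate against a chirp template and compare the
# normalized peak with a threshold, Neyman-Pearson style.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n) / n
template = np.exp(1j * np.pi * 80 * t**2)          # unit-amplitude chirp

def detect(rx, threshold):
    # np.correlate conjugates its second argument, so this is a matched filter
    corr = np.abs(np.correlate(rx, template, mode="valid"))
    return corr.max() / n >= threshold

noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(detect(template + noise, 0.5))   # chirp present: peak near 1 -> True
print(detect(noise, 0.5))              # noise only: small peak -> False
```

The threshold (0.5 here, arbitrary) is what the paper's analytical false-alarm / missed-detection expressions would set from the target error probabilities.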
Research on Combined Navigation Algorithm Based on Adaptive Interactive Multi-Kalman Filter Modeling
2024,
46(12):
4493-4503.
doi: 10.11999/JEIT240426
Abstract:
Practical applications struggle to obtain prior knowledge about inertial systems and sensors, affecting information fusion and positioning accuracy in combined navigation systems. To address the degradation of integrated navigation performance due to satellite signal quality changes and system nonlinearity in vehicle navigation, a Fuzzy Adaptive Interactive Multi-Model algorithm based on Multiple Kalman Filters (FAIMM-MKF) is proposed. It integrates a Fuzzy Controller based on satellite signal quality (Fuzzy Controller) and an Adaptive Interactive Multi-Model (AIMM). Improved Kalman filters such as Unscented Kalman Filter (UKF), Iterated Extended Kalman Filter (IEKF), and Square-Root Cubature Kalman Filter (SRCKF) are designed to match vehicle dynamics models. The method’s performance is verified through in-vehicle semi-physical simulation experiments. Results show that the method significantly improves vehicle positioning accuracy in complex environments with varying satellite signal quality compared to traditional interactive multi-model algorithms.
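The multiple-model idea underlying the method above can be shown in a compact form (a static multiple-model weighting, a simplified cousin of the interactive multi-model scheme, with hypothetical noise settings): two Kalman filters with different process noise run in parallel, and their model probabilities are re-weighted each step by the Gaussian likelihood of each filter's innovation.

```python
import numpy as np

# Minimal two-model weighting sketch (illustrative, not FAIMM-MKF): two 1D
# constant-position Kalman filters with different process noise track the same
# measurements; model probabilities follow each filter's innovation likelihood.
class KF1D:
    def __init__(self, q, r):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
    def step(self, z):
        self.p += self.q                       # predict
        s = self.p + self.r                    # innovation variance
        k = self.p / s                         # Kalman gain
        nu = z - self.x                        # innovation
        self.x += k * nu                       # update state
        self.p *= (1 - k)                      # update covariance
        # Gaussian likelihood of the innovation, used for model weighting
        return np.exp(-0.5 * nu**2 / s) / np.sqrt(2 * np.pi * s)

filters = [KF1D(q=1e-4, r=0.1), KF1D(q=1e-1, r=0.1)]   # quiet vs agile model
prob = np.array([0.5, 0.5])
rng = np.random.default_rng(1)
for z in 1.0 + 0.1 * rng.standard_normal(50):          # near-constant truth
    lik = np.array([f.step(z) for f in filters])
    prob = prob * lik
    prob /= prob.sum()

fused = sum(p * f.x for p, f in zip(prob, filters))
print(fused)   # close to the true value 1.0
```

The fuzzy controller in the paper plays a role analogous to `prob` here, except driven by satellite signal quality rather than innovation likelihoods alone.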
2024,
46(12):
4504-4512.
doi: 10.11999/JEIT240399
Abstract:
Wireless through-the-earth communication provides a solution for information transmission in heavily shielded space. The received current field signal has low Signal-to-Noise Ratio (SNR), is easily distorted, and is greatly affected by carrier frequency offset, making signal acquisition difficult. In this paper, a long synchronization signal frame structure is designed and a two-stage long correlation signal acquisition algorithm is proposed that combines coarse and fine frequency offset estimation. In the first stage, the training symbols in the received time-domain signal are used for coarse estimation of sampling interval deviation based on the maximum likelihood algorithm, and the coarse estimation value of the sampling point compensation interval is calculated. In the second stage, the coarse estimation value and the received SNR are combined to determine the traversal range of the fine estimation value of the sampling point compensation interval. A long correlation template signal with local compensation is designed to achieve accurate acquisition of the current field signal. The algorithm’s performance is verified in a heavily shielded space located 30.26 m below the ground. Experimental results show that compared to traditional sliding correlation algorithms, the proposed algorithm has a higher acquisition success probability.
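The coarse-then-fine structure of the acquisition algorithm can be sketched with a toy baseband example (illustrative, not the paper's exact estimator): a cheap short correlation scans the whole record to locate the preamble coarsely, then a full-length correlation over a small window around the coarse hit refines the frame offset.

```python
import numpy as np

# Two-stage acquisition sketch: coarse scan with a short correlation,
# fine refinement with the full-length preamble over a small window.
rng = np.random.default_rng(2)
preamble = rng.choice([-1.0, 1.0], size=64)
true_offset = 137
rx = 0.3 * rng.standard_normal(512)
rx[true_offset:true_offset + 64] += preamble      # embed the frame

# Stage 1: coarse scan using only the first 32 chips of the preamble
short = preamble[:32]
coarse = max(range(len(rx) - 64),
             key=lambda i: np.dot(rx[i:i + 32], short))

# Stage 2: fine search with the full preamble in a +/-3 sample window
fine = max(range(max(coarse - 3, 0), min(coarse + 4, len(rx) - 64)),
           key=lambda i: np.dot(rx[i:i + 64], preamble))
print(fine)   # expected to recover true_offset
```

In the paper, the coarse stage additionally estimates the sampling interval deviation and the fine stage uses a locally compensated long-correlation template; the window-around-the-coarse-hit structure is the shared idea.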
Electromagnetic Sensitivity Analysis of Curved Boundaries Based on the Adjoint Variable Method
2024,
46(12):
4513-4521.
doi: 10.11999/JEIT240432
Abstract:
Sensitivity analysis is an evaluation method for the influence of variations in design parameters on electromagnetic performance; the resulting sensitivity information guides the analysis of structural models to ensure compliance with design specifications. In the optimization design of electromagnetic structures with commercial software, traditional algorithms are often employed, involving repeated adjustments to the geometry, and this approach is expensive in computational time and resources. To enhance the efficiency of model design, a stable and efficient processing scheme, the Adjoint Variable Method (AVM), is adopted in this paper. This method estimates first- and second-order sensitivities with respect to parameter changes with only two simulation runs. The application of AVM has predominantly been confined to the sensitivity analysis of rectangular boundary parameters; this paper makes the first extension of AVM to the sensitivity analysis of arc boundary parameters. Efficient analysis of the electromagnetic sensitivity of curved structures is accomplished for three distinct scenarios: fixed intrinsic parameters, frequency-dependent objective functions, and transient impulse functions. Compared with the Finite-Difference Method (FDM), a significant enhancement in computational efficiency is achieved by the proposed method. The method substantially expands the application scope of AVM to curved boundaries, and it can be utilized in optimization problems such as the electromagnetic structures of plasma models and the edge structures of complex antenna models. When computational resources are limited, the reliability and stability of electromagnetic structure optimization can be ensured by the proposed method.
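The efficiency argument above (two solves for all parameters, versus one extra solve per parameter for FDM) can be verified on a tiny linear system (a generic adjoint-sensitivity sketch, not the paper's electromagnetic solver): for an objective J = cᵀx with A(p)x = b, one adjoint solve Aᵀλ = c gives dJ/dp = -λᵀ(dA/dp)x for every parameter p.

```python
import numpy as np

# Adjoint sensitivity on a toy 2x2 system, checked against central
# finite differences. A(p) x = b, objective J(p) = c^T x(p).
A = lambda p: np.array([[2.0 + p, 1.0], [1.0, 3.0]])
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # derivative of A w.r.t. p
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
p0 = 0.5

x = np.linalg.solve(A(p0), b)        # forward solve
lam = np.linalg.solve(A(p0).T, c)    # one adjoint solve covers all parameters
adjoint = -lam @ dA_dp @ x           # dJ/dp via the adjoint formula

h = 1e-6                             # FDM needs two extra solves per parameter
fdm = (c @ np.linalg.solve(A(p0 + h), b)
       - c @ np.linalg.solve(A(p0 - h), b)) / (2 * h)
print(abs(adjoint - fdm) < 1e-5)     # -> True
```

For a model with many boundary parameters, only `dA_dp` changes per parameter; the expensive solves for `x` and `lam` are shared, which is the source of AVM's advantage over FDM.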
2024,
46(12):
4522-4528.
doi: 10.11999/JEIT240417
Abstract:
Convolutional Neural Networks (CNNs) exhibit translation invariance but lack rotation invariance. In recent years, rotation encoding for CNNs has become a mainstream approach to address this issue, but it requires a significant number of parameters and computational resources. Given that images are the primary focus of computer vision, a model called Offset Angle and Multibranch CNN (OAMC) is proposed to achieve rotation invariance. Firstly, the model detects the offset angle of the input image and rotates it back accordingly. Secondly, the rotated image is fed into a multibranch CNN with no rotation encoding. Finally, a response module outputs the optimal branch as the final prediction of the model. Notably, with a minimal parameter count of 8k, the model achieves a best classification accuracy of 96.98% on the rotated handwritten digits dataset. Furthermore, compared with previous research on remote sensing datasets, the model achieves up to an 8% improvement in accuracy using only one-third of the parameters of existing models.
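The "estimate the offset angle, then rotate back" step can be illustrated on a 2D point set (an illustrative stand-in; OAMC estimates the angle with a learned network on images): the dominant orientation is the leading eigenvector of the data covariance, so rotating by its negative cancels the offset.

```python
import numpy as np

# Estimate the orientation of a rotated, elongated point set via the leading
# eigenvector of its covariance -- the classical version of "detect the
# offset angle of the input and rotate it back".
def principal_angle(points):
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]           # eigenvector of largest eigenvalue
    return np.arctan2(major[1], major[0])

rng = np.random.default_rng(3)
# elongated blob along the x-axis, then rotated by 30 degrees
blob = np.column_stack([rng.standard_normal(500) * 5, rng.standard_normal(500)])
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = blob @ R.T

est = np.rad2deg(principal_angle(rotated)) % 180   # orientation is mod 180
print(est)   # close to 30
```

A learned angle estimator replaces this moment-based one in OAMC, but the downstream logic is the same: undo the estimated rotation so the multibranch CNN never needs rotation encoding.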
2024,
46(12):
4529-4541.
doi: 10.11999/JEIT240502
Abstract:
To address the issues of significant target scale variation, edge discontinuity, and blurring in Salient Object Detection (SOD) for 360° omnidirectional images, a method based on the Adjacent Coordination Network (ACoNet) is proposed. First, an adjacent detail fusion module is used to capture detail and edge information from adjacent features, which facilitates accurate localization of salient objects. Then, a semantic-guided feature aggregation module is employed to aggregate semantic feature information across scales between shallow and deep features, suppressing the noise transmitted by shallow features; this alleviates the problems of discontinuous salient objects and blurred object-background boundaries in the decoding stage. Additionally, a multi-scale semantic fusion submodule is constructed to enlarge the receptive field across different convolution layers, thereby better training the salient object boundaries. Extensive experiments on two public datasets demonstrate that, compared with 13 other advanced methods, the proposed approach achieves significant improvements on six objective evaluation metrics. Moreover, the subjective visualized detection results show better edge contours and clearer spatial structural details in the saliency maps.
2024,
46(12):
4542-4552.
doi: 10.11999/JEIT240087
Abstract:
In order to improve the accuracy of emotion recognition models and solve the problem of insufficient emotional feature extraction, this paper conducts research on bimodal emotion recognition involving audio and facial imagery. In the audio modality, a feature extraction model of a Multi-branch Convolutional Neural Network (MCNN) incorporating a channel-space attention mechanism is proposed, which extracts emotional features from speech spectrograms across time, space, and local feature dimensions. For the facial image modality, a feature extraction model using a Residual Hybrid Convolutional Neural Network (RHCNN) is introduced, which further establishes a parallel attention mechanism that concentrates on global emotional features to enhance recognition accuracy. The emotional features extracted from audio and facial imagery are then classified through separate classification layers, and a decision fusion technique is utilized to amalgamate the classification results. The experimental results indicate that the proposed bimodal fusion model has achieved recognition accuracies of 97.22%, 94.78%, and 96.96% on the RAVDESS, eNTERFACE’05, and RML datasets, respectively. These accuracies signify improvements over single-modality audio recognition by 11.02%, 4.24%, and 8.83%, and single-modality facial image recognition by 4.60%, 6.74%, and 4.10%, respectively. Moreover, the proposed model outperforms related methodologies applied to these datasets in recent years. This illustrates that the advanced bimodal fusion model can effectively focus on emotional information, thereby enhancing the overall accuracy of emotion recognition.
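The decision-fusion step that amalgamates the two classifiers can be sketched in a few lines (a weighted-average fusion rule, given here as an illustration; the paper's exact rule may differ): average the per-class probabilities of the audio and face classifiers, then take the argmax of the fused distribution.

```python
import numpy as np

# Decision-fusion sketch: weighted average of two modality classifiers'
# per-class probabilities, then argmax of the fused distribution.
def fuse(audio_probs, face_probs, w_audio=0.5):
    fused = (w_audio * np.asarray(audio_probs)
             + (1 - w_audio) * np.asarray(face_probs))
    fused = fused / fused.sum()                # renormalize to a distribution
    return fused, int(np.argmax(fused))

audio = [0.6, 0.3, 0.1]   # audio model leans to class 0
face = [0.2, 0.7, 0.1]    # face model leans strongly to class 1
probs, label = fuse(audio, face)
print(label)   # equal weights: fused [0.4, 0.5, 0.1] -> class 1
```

Because fusion happens at the decision level, each modality keeps its own feature extractor and classification layer, matching the architecture described in the abstract.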
2024,
46(12):
4553-4562.
doi: 10.11999/JEIT240428
Abstract:
In the era of big data, tables widely exist in various document images, and table detection is of great significance for the reuse of table information. In response to issues such as limited receptive fields, reliance on predefined proposals, and inaccurate table boundary localization in existing table detection algorithms based on convolutional neural networks, a table detection network based on the DINO model is proposed in this paper. Firstly, an image preprocessing method is designed to enhance the corner and line features of tables, enabling more precise table boundary localization and effective differentiation between tables and other document elements such as text. Secondly, a backbone network, SwTNet-50, is designed: Swin Transformer Blocks (STB) are introduced into ResNet to effectively combine local and global features, improving the feature extraction ability of the model and the detection accuracy of table boundaries. Finally, to address the inadequacies of encoder feature learning in one-to-one matching and insufficient positive-sample training in the DINO model, a collaborative hybrid assignments training strategy is adopted to improve the feature learning ability of the encoder and the detection precision. Compared with various deep-learning-based table detection methods, the proposed model outperforms other algorithms on the TNCR table detection dataset, with F1-Scores of 98.2%, 97.4%, and 93.3% at IoU thresholds of 0.5, 0.75, and 0.9, respectively. On the IIIT-AR-13K dataset, the F1-Score is 98.6% at an IoU threshold of 0.5.
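The IoU thresholds behind the reported F1-Scores can be made concrete with the standard box-overlap computation: two axis-aligned boxes (x1, y1, x2, y2) count as a match at threshold t when their intersection-over-union is at least t.

```python
# Standard IoU between two axis-aligned boxes (x1, y1, x2, y2): intersection
# area divided by union area. A detection matches ground truth at threshold t
# when iou(pred, gt) >= t, which is how the 0.5/0.75/0.9 F1-Scores are scored.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

pred, gt = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, gt))   # overlap 50 / union 150 -> 0.333...
```

This pair would count as a match at IoU 0.25 but a miss at 0.5, which is why F1 drops as the threshold tightens from 0.5 to 0.9 in the reported results.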
2024,
46(12):
4563-4574.
doi: 10.11999/JEIT240388
Abstract:
Generative adversarial networks have attracted much attention because they provide new ideas for blind super-resolution reconstruction. Existing methods, however, do not fully account for the low-frequency retention characteristics of image degradation: they process high- and low-frequency components in the same way, make poor use of frequency details, and therefore struggle to obtain better reconstruction results. To address this, a frequency-separation generative adversarial super-resolution reconstruction network based on dense residuals and quality assessment is proposed. The network adopts the idea of frequency separation to process the high-frequency and low-frequency information of the image separately, improving its ability to capture high-frequency information while simplifying the processing of low-frequency features. The base block in the generator integrates a spatial feature transformation layer into dense wide-activation residuals, which enhances deep feature representation while differentiating local information. In addition, a no-reference quality assessment network is designed specifically for super-resolution reconstructed images using the Visual Geometry Group (VGG) architecture, providing a new quality assessment loss for the reconstruction network and further improving the visual quality of reconstructed images. Experimental results show that the method achieves better reconstruction on multiple datasets than current state-of-the-art methods of the same kind. This demonstrates that super-resolution reconstruction using generative adversarial networks with frequency separation can effectively exploit the image frequency components and improve the reconstruction effect.
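The frequency-separation idea can be demonstrated with a simple FFT low-pass mask (an illustration of the concept, not the network's learned separation): split an image into low- and high-frequency components that sum back to the original exactly, so each branch can be processed independently without losing information.

```python
import numpy as np

# Frequency separation with an FFT low-pass mask: low + high == original.
def split_frequencies(img, cutoff):
    spec = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]   # per-axis normalized freqs
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.hypot(fy, fx) <= cutoff            # keep radial freqs <= cutoff
    low = np.fft.ifft2(spec * mask).real         # low-frequency component
    high = img - low                             # residual high frequencies
    return low, high

rng = np.random.default_rng(4)
img = rng.standard_normal((32, 32))
low, high = split_frequencies(img, cutoff=0.1)
print(np.allclose(low + high, img))   # -> True: the split is lossless
```

In the paper's network, the low-frequency path can stay lightweight (degradation largely preserves it) while capacity is concentrated on the harder high-frequency branch.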
2024,
46(12):
4575-4588.
doi: 10.11999/JEIT240299
Abstract:
As Moore’s Law comes to an end, improving the chip manufacturing process becomes increasingly difficult, and chiplet technology has been widely adopted to improve chip performance. However, the new design parameters introduced by chiplet architectures pose significant challenges to computer architecture simulators. To fully support exploration and evaluation of chiplet architectures, the System-level Exploration and Evaluation simulator for Chiplet (SEEChiplet), a framework based on the gem5 simulator, is developed in this paper. Firstly, three design parameters of concern in chiplet-based chip design are summarized: (1) chiplet cache system design; (2) packaging simulation; (3) interconnection networks between chiplets. Secondly, for these three design parameters, this paper: (1) designs and implements a new private last-level cache system to expand the cache design space; (2) modifies the existing gem5 global directory to adapt to the new private Last Level Cache (LLC) system; (3) models two common chiplet packaging methods and the inter-chiplet network. Finally, a chiplet-based processor running the PARSEC 3.0 benchmark suite is simulated, demonstrating that SEEChiplet can explore and evaluate the chiplet design space.