2020 Vol. 42, No. 1
Current work on intelligent and connected transportation systems is presented, focusing on the state of the art of frameworks and key technologies in China and internationally. Research progress in several critical directions is elaborated, including external environment perception, autonomous vehicle decision-making, control execution, and cooperative vehicle-infrastructure systems. On the basis of analyzing and summarizing the existing literature, the scheme of the future intelligent and connected transportation system and its working principle are described. The future system provides full-path planning and precise positioning: Real-Time Kinematic (RTK) and Synthetic Aperture Radar (SAR) technologies are used to detect and locate moving or stationary objects, including those without GPS, so that detection continuity can be guaranteed where GPS signals are weak or absent (e.g., in tunnels or indoors) and in close-range, non-line-of-sight situations. Mobile Edge Computing (MEC) can also be applied in the system to address key problems such as low latency and large-scale network access, while big data, cloud computing, Internet of Things (IoT), and mobile communication technologies are used to realize a global, networked intelligent and connected transportation system.
With the rapid development of intelligent transportation, vehicle terminals generate a large number of data messages that need to be processed in real time. Competition for limited resources increases message processing delay and the energy consumption of terminal equipment. To balance delay against energy consumption, this paper proposes a content-aware classification offloading algorithm based on Mobile Edge Computing (MEC). Firstly, safety messages are prioritized according to the Analytic Hierarchy Process (AHP), and a task offloading model that jointly optimizes delay and energy consumption is established by assigning different weight coefficients to the two objectives. The Lagrangian relaxation method is used to transform the non-convex problem into a convex one, and the subgradient projection method is combined with a greedy algorithm to obtain a feasible solution. The performance evaluation results show that the algorithm reduces message processing delay and energy consumption to some extent.
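As an illustration of the weighted delay/energy trade-off described above, a minimal Python sketch follows. It is not the paper's Lagrangian-relaxation solution; all parameter values (CPU frequencies, uplink rate, power figures, weights) are hypothetical, and the cost model is deliberately simplified.

# Illustrative sketch: a weighted delay/energy cost for deciding whether a
# prioritized message is processed locally or offloaded to an MEC server.
# All numeric parameters below are invented for illustration only.

def weighted_cost(cycles, data_bits, f_hz, rate_bps, power_w, w_delay=0.6, w_energy=0.4):
    """Cost = w_delay * delay + w_energy * energy for one execution option."""
    delay = cycles / f_hz + data_bits / rate_bps      # compute time + transmit time
    energy = power_w * delay                          # very simple energy model
    return w_delay * delay + w_energy * energy

def offload_decision(task):
    # Local execution: no transmission, slower CPU, higher device power draw.
    local = weighted_cost(task["cycles"], 0, f_hz=1e9, rate_bps=float("inf"), power_w=0.9)
    # Edge execution: pay the uplink cost, but use a faster server CPU.
    edge = weighted_cost(task["cycles"], task["bits"], f_hz=10e9, rate_bps=20e6, power_w=0.3)
    return "edge" if edge < local else "local"

print(offload_decision({"cycles": 2e8, "bits": 4e5}))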
To improve the positioning accuracy and real-time performance of vehicle positioning in Vehicular Ad-hoc NETworks (VANETs), a high-precision, real-time localization algorithm for autonomous vehicles is proposed, combining two techniques: one based on the Matrix Pencil (MP) method with Non-Linear Fitting (NLF), and one based on visual perception. The MP-NLF technique locates the vehicle with a single station using joint TOA/AOA estimation and introduces high-resolution estimation to improve accuracy. The visual perception technique completes the localization by extracting feature information from visual images of the positioning area, and applies an unscented Kalman filter combined with inertial sensor information to further improve accuracy. Simulation results show that, compared with the traditional multipath fingerprinting algorithm, the proposed algorithm performs better even at low Signal-to-Noise Ratio (SNR).
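For orientation, once a single station has estimated Time Of Arrival (TOA) and Angle Of Arrival (AOA), the vehicle position follows from simple geometry. The sketch below assumes a 2-D scenario and is not the MP-NLF estimator itself.

# Minimal geometric sketch (assumed 2-D scenario): convert a TOA/AOA estimate
# from a single station into Cartesian coordinates.
import math

C = 3e8  # speed of light, m/s

def toa_aoa_to_position(toa_s, aoa_rad, station_xy=(0.0, 0.0)):
    """Convert a TOA/AOA pair into coordinates relative to the station."""
    r = C * toa_s                      # range from the station
    x = station_xy[0] + r * math.cos(aoa_rad)
    y = station_xy[1] + r * math.sin(aoa_rad)
    return x, y

print(toa_aoa_to_position(1.0e-6, math.radians(30)))  # about (259.8, 150.0) metres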
In vehicular networks, the high mobility and complicated behaviors of vehicles fully manifest the unique characteristics of vehicular communications. In such a scenario, data is generated in real time, traffic is distributed unevenly across the city, and communication patterns appear in various forms. Together, these characteristics mean that traditional vehicular network deployment and resource management schemes cannot satisfy the diverse quality-of-service requirements. It is therefore urgent to design intelligent heterogeneous vehicular networks with ubiquitous "vehicle-person-road-cloud" interconnection. How to use data analysis to predict behavior and support the diversified, differentiated, high-quality communication requirements of vehicular networks remains an open problem. This paper reviews research on vehicle behavior analysis, network deployment and access, and resource management, and then focuses on the enabling technologies for intelligent vehicular networks. Firstly, by adopting advanced artificial intelligence and data analysis techniques, the spatial and temporal distribution characteristics of vehicle behaviors are explored, and general prediction models for these behaviors are established. Based on these prediction models, efficient and intelligent network deployment, multiple-network access, and resource management schemes are designed to meet the high-capacity and high-efficiency demands of future vehicular networks.
To address the problems that traditional traffic accident risk prediction algorithms cannot automatically learn data features and have limited model expressiveness, a traffic accident risk prediction algorithm based on deep learning is proposed. The algorithm first extracts multi-dimensional features from the large amount of traffic data collected in the vehicular edge network, using a convolutional neural network deployed on the edge server. After normalization, mean removal, and other pre-processing, the new variables are input into the convolutional and pooling layers for training. Finally, the risk of traffic accidents is predicted from the output of the fully connected layer. Simulation results validate that the algorithm can predict traffic accident risk, with lower loss and higher prediction accuracy than the traditional machine learning BP neural network and Logistic Regression algorithms.
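A minimal PyTorch sketch of the conv + pooling + fully-connected pipeline is given below. The layer sizes, input resolution, and sigmoid risk output are illustrative assumptions, not the paper's architecture.

# Minimal CNN sketch: convolution and pooling layers feed a fully connected
# layer whose output is interpreted as an accident-risk score in (0, 1).
import torch
import torch.nn as nn

class RiskCNN(nn.Module):
    def __init__(self, in_channels=1, n_features=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, n_features, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(n_features * 8 * 8, 1)  # assumes 32x32 input maps

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.classifier(x.flatten(1)))

model = RiskCNN()
print(model(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 1])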
The high-speed movement of vehicles inevitably leads to frequent data migration between edge servers and increases communication delay, which brings great challenges to the real-time computing services of edge servers. To solve this problem, a real-time reinforcement learning method based on Deep Q-learning Networks according to the vehicle motion Trajectory Process (DQN-TP) is proposed. The proposed algorithm separates decision-making from training by using two neural networks. The decision neural network obtains the network state in real time according to the vehicle's movement track and chooses between virtual machine migration and task migration; at the same time, it uploads its decision records to the memory replay pool in the cloud. The evaluation neural network in the cloud trains on the records in the memory replay pool and periodically pushes its parameters to the on-board decision neural network, so that training and decision-making can be carried out simultaneously. Finally, extensive simulation experiments show that the proposed algorithm effectively reduces latency compared with existing task migration and virtual machine migration methods.
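The decision/evaluation split described above can be sketched as follows. This is a simplified illustration with made-up state and action sizes and a toy reward; it is not the DQN-TP formulation itself.

# Sketch: the on-board "decision" net only acts and logs experience; the
# cloud-side "evaluation" net trains on the replay pool and periodically
# pushes its parameters back to the decision net.
import random
from collections import deque
import torch
import torch.nn as nn

def make_qnet(n_state=6, n_action=2):          # action 0: VM migration, 1: task migration
    return nn.Sequential(nn.Linear(n_state, 32), nn.ReLU(), nn.Linear(32, n_action))

decision_net, eval_net = make_qnet(), make_qnet()
replay_pool = deque(maxlen=10_000)             # "memory replay pool" in the cloud
optimizer = torch.optim.Adam(eval_net.parameters(), lr=1e-3)

def choose_action(state):                      # runs on the vehicle, in real time
    with torch.no_grad():
        return int(decision_net(state).argmax())

def record(state, action, reward, next_state): # decision record uploaded to the cloud
    replay_pool.append((state, action, reward, next_state))

def train_step(batch_size=32, gamma=0.9):      # runs in the cloud on the replay pool
    if len(replay_pool) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*[(s, torch.tensor(a), torch.tensor(r), s2)
                                         for s, a, r, s2 in random.sample(replay_pool, batch_size)]))
    q = eval_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    target = r + gamma * eval_net(s2).max(1).values.detach()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def sync_parameters():                         # periodic update to the on-board net
    decision_net.load_state_dict(eval_net.state_dict())

state = torch.randn(6)
for t in range(100):                           # toy interaction loop
    a = choose_action(state)
    next_state = torch.randn(6)
    record(state, a, reward=-0.1 * a, next_state=next_state)
    train_step()
    state = next_state
sync_parameters()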
Autonomous vehicles are equipped with multiple on-board sensors to achieve self-driving functions. However, autonomous vehicles generate a tremendous amount of data, which significantly challenges real-time task processing. Multi-vehicle cooperation, which makes the best use of vehicles' onboard computing resources, makes autonomous and cooperative driving a promising way to solve this problem. In this case, it is vital for autonomous and cooperative driving to form a driving platoon and allocate driving tasks efficiently. In this paper, a general analytical model based on G/G/1 queueing theory is developed to model the topology of platoons. Next, a Support Vector Machine (SVM) is adopted to classify vehicles in the platoon as "idle" or "busy" based on their computing load and task processing capacity. Finally, based on this analysis, an efficient task balancing strategy for platoons in autonomous and cooperative driving, called Classification based Greed Balancing Strategy (C-GBS), is proposed to balance the task burden among vehicles and enable more efficient cooperation. Extensive simulations demonstrate that the proposed technique can reduce the processing delay of driving tasks in platoons under high computing load, improving processing efficiency in autonomous vehicles.
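The classify-then-balance idea can be sketched in a few lines. The features, training labels, and per-task load increment below are invented for illustration and are not the paper's data or strategy.

# Sketch: an SVM labels platoon members as "busy" (1) or "idle" (0) from
# computing load and processing capacity, then tasks are greedily handed to
# the least-loaded idle vehicle.
import numpy as np
from sklearn.svm import SVC

# Training data: [current computing load, task processing capacity] -> label
X = np.array([[0.9, 1.0], [0.8, 1.5], [0.2, 1.2], [0.3, 2.0], [0.7, 0.8], [0.1, 1.8]])
y = np.array([1, 1, 0, 0, 1, 0])
clf = SVC(kernel="rbf").fit(X, y)

vehicles = {"v1": [0.85, 1.0], "v2": [0.15, 1.6], "v3": [0.25, 2.0]}
labels = {vid: int(clf.predict([feat])[0]) for vid, feat in vehicles.items()}

def greedy_balance(pending_tasks):
    """Assign each task to the idle vehicle with the lowest current load."""
    idle = {vid: vehicles[vid][0] for vid, lab in labels.items() if lab == 0}
    plan = {}
    for task in pending_tasks:
        target = min(idle, key=idle.get)      # least-loaded idle vehicle
        plan[task] = target
        idle[target] += 0.1                   # assumed per-task load increment
    return plan

print(labels, greedy_balance(["t1", "t2", "t3"]))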
Securely sharing and publishing location trajectory data relies on location privacy protection technology. Before the advent of differential privacy, K-anonymity and its derived models provided a means of quantitatively assessing location-trajectory privacy protection. However, their security relies heavily on assumptions about the attacker's background knowledge, and these models cannot provide adequate protection when a new attack appears. Differential privacy effectively compensates for these problems: it provides provable levels of privacy protection grounded in rigorous mathematical theory and is increasingly used in trajectory data privacy publishing. Therefore, trajectory privacy protection technology based on differential privacy theory is studied and analyzed. Methods for publishing spatial statistics, such as location histograms and trajectory histograms, are introduced, together with methods for publishing trajectory data sets and models for privacy protection in continuous real-time location release. The existing methods are compared and analyzed, and key directions for future development are put forward.
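As a concrete reference point for histogram publishing under epsilon-differential privacy, the textbook Laplace mechanism is sketched below. This is a generic mechanism, not any specific scheme from the surveyed literature, and the sensitivity assumption is stated in the comments.

# Laplace mechanism sketch for location-histogram publishing.
import numpy as np

def publish_histogram(counts, epsilon=1.0, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise to each bin before release.

    Assuming one user (or trajectory) affects at most one bin, the L1
    sensitivity is 1, so the noise scale is 1/epsilon.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(counts))
    noisy = np.asarray(counts, dtype=float) + noise
    return np.clip(noisy, 0, None)            # post-processing: no negative counts

true_counts = [120, 45, 310, 7]               # visits per spatial cell
print(publish_histogram(true_counts, epsilon=0.5))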
Quantum walks have been proposed for teleporting qubits and qudits. Based on quantum walk teleportation, an arbitrated quantum signature scheme with quantum walks on regular graphs is suggested, in which the entangled resource does not need to be prepared in advance. In the initial phase, the secret keys are generated via a quantum key distribution system. In the signing phase, the signature for the transmitted message is created by the signer, and teleportation of quantum walks on regular graphs is applied to teleport the encrypted message copy from the signer to the verifier. Concretely, the sender encodes the ciphertext of the message copy on the coin state; two-step quantum walks are then performed on the initial system state, engendering the entangled state necessary for quantum teleportation, which serves as the basis of signature generation and verification. In the verifying phase, the verifier verifies the validity of the completed signature with the aid of an arbitrator. Additionally, the use of random numbers and a public board deters the verifier's existential forgery and repudiation attacks before the verifier accepts the true message. Analyses show that the suggested arbitrated quantum signature algorithm satisfies the two general requirements, i.e., impossibility of disavowal by the signer and the verifier, and impossibility of forgery by anyone. The discussions demonstrate that the scheme may not prevent the signer's disavowal attack, and corresponding improvements are presented. The scheme may be realizable, because quantum walks have been experimentally demonstrated in different physical systems.
To improve the efficiency of triangularizing an ideal lattice basis, a fast triangularization algorithm is proposed by exploiting the polynomial structure; it runs in time O(n^3 log^2 B), where n is the dimension of the lattice and B is the infinity norm of the lattice basis. Based on this algorithm, a deterministic algorithm for computing the Smith Normal Form (SNF) of an ideal lattice is given, which has the same time complexity and is thus faster than any previously known algorithm. Moreover, for a special class of ideal lattices, a method is presented to transform such triangular bases into Hermite Normal Form (HNF) faster than previous algorithms.
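For orientation, the two normal forms mentioned above can be computed for small integer lattice bases with SymPy's general-purpose routines, as sketched below; these are not the faster ideal-lattice-specific algorithms proposed in the paper, and the example matrix is arbitrary.

# Generic HNF/SNF computation for a small integer basis (illustration only).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import hermite_normal_form, smith_normal_form

B = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])

print(hermite_normal_form(B))            # triangular basis generating the same lattice
print(smith_normal_form(B, domain=ZZ))   # diagonal form d1 | d2 | d3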
In radio monitoring and target localization applications, received signals are often affected by complex electromagnetic environments, such as impulsive noise and co-channel interference. Traditional signal processing methods based on second-order statistics often fail to work properly, and methods based on fractional lower-order statistics also encounter difficulties because they depend on prior knowledge of the signals and noise. In recent years, the theory and methods of correntropy and cyclic correntropy signal processing, which have attracted wide attention in the signal processing community, have been put forward. They are effective technical means for signal analysis and processing, parameter estimation, target localization, and other applications in complex electromagnetic environments, and they have greatly promoted the theory and application of non-Gaussian, non-stationary signal processing. This paper systematically reviews the basic theory and methods of correntropy and cyclic correntropy signal processing, including the background, definitions, properties, and characteristics of correntropy and cyclic correntropy, as well as their mathematical and physical meanings. It also introduces applications of correntropy and cyclic correntropy signal processing in many fields, hoping to benefit the research and application of non-Gaussian, non-stationary statistical signal processing.
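For reference, the commonly used definitions of correntropy and cyclic correntropy with a Gaussian kernel of bandwidth sigma can be written as follows; the notation here is generic and may differ from that of the reviewed literature.

% Correntropy of random variables X, Y; time-varying correntropy of a signal x(t);
% and its cyclic component at cycle frequency \varepsilon.
\begin{align}
  V_\sigma(X,Y) &= \mathrm{E}\!\left[\kappa_\sigma(X-Y)\right],
  \qquad \kappa_\sigma(u) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{u^2}{2\sigma^2}\right),\\
  V_x(t,\tau) &= \mathrm{E}\!\left[\kappa_\sigma\bigl(x(t)-x(t+\tau)\bigr)\right],
  \qquad
  V_x^{\varepsilon}(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}
      V_x(t,\tau)\, e^{-j2\pi\varepsilon t}\, dt .
\end{align}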
Automatic Target Recognition (ATR) is an important research area in radar information processing. Because deep Convolutional Neural Networks (CNNs) require no feature engineering and deliver superior image classification performance, they are attracting more and more attention in radar automatic target recognition. This paper reviews the application of CNNs to radar image processing. Firstly, the relevant background, including the characteristics of radar images, is introduced, and the limitations of traditional radar automatic target recognition methods are pointed out. The principle, composition, and development of CNNs in computer vision are then introduced. Next, the research status of CNNs in radar automatic target recognition is surveyed, and detection and recognition methods for SAR images are presented in detail. The challenges of radar automatic target recognition are analyzed. Finally, prospects are given for new convolutional neural network theories and models, new radar imaging technologies, and future applications in complex environments.
In complex electromagnetic environments, multipath clutter in passive radar may be non-stationary and exhibit jump characteristics. To suppress this kind of non-stationary clutter, a clutter suppression method based on channel segmentation and smoothing is proposed, which exploits the Orthogonal Frequency Division Multiplexing (OFDM) modulation of the transmitted signal. First, the time-domain signal model of the jumping clutter is established and then transformed into the subcarrier domain using the OFDM structure. After estimating the channel for each OFDM symbol and smoothing the segmented channel estimates, the non-stationary clutter can be suppressed in each segment using the smoothed channel estimate and the reference signal. Simulation and experimental data show that the proposed method can effectively suppress non-stationary clutter with jump characteristics.
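A toy numpy sketch of the subcarrier-domain idea is given below. The signal model, least-squares estimator, and segment length are assumptions made for illustration; the paper's channel segmentation and smoothing are more elaborate.

# Sketch: per-symbol LS channel estimates, smoothed within segments where the
# clutter channel is assumed piecewise-stationary, then used for suppression.
import numpy as np

def per_symbol_channel_estimate(rx_symbols, ref_symbols):
    """LS estimate H[k, m] = Y[k, m] / X[k, m] per subcarrier k and OFDM symbol m."""
    return rx_symbols / ref_symbols

def segment_smooth(H, segment_len=16):
    """Average the per-symbol estimates inside each segment of OFDM symbols."""
    n_sub, n_sym = H.shape
    H_s = np.empty_like(H)
    for start in range(0, n_sym, segment_len):
        seg = slice(start, min(start + segment_len, n_sym))
        H_s[:, seg] = H[:, seg].mean(axis=1, keepdims=True)
    return H_s

def suppress_clutter(rx_symbols, ref_symbols, segment_len=16):
    """Subtract the smoothed clutter contribution from the received signal."""
    H_s = segment_smooth(per_symbol_channel_estimate(rx_symbols, ref_symbols), segment_len)
    return rx_symbols - H_s * ref_symbols

rng = np.random.default_rng(0)
X = np.exp(1j * 2 * np.pi * rng.random((64, 128)))   # reference OFDM symbols
H = rng.normal(size=(64, 1)) * np.ones((64, 128))    # slowly varying clutter channel
Y = H * X + 0.01 * rng.normal(size=(64, 128))
print(np.abs(suppress_clutter(Y, X)).mean())         # small residual after suppression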
Insect radar is the most effective tool for observing insect migration. To realize target recognition with insect radar, it is important to study the Radar Cross Section (RCS) characteristics of insects. This paper analyzes both the static and the dynamic RCS characteristics of insects. Firstly, based on measured X-band fully-polarimetric RCS data, the static RCS characteristics of insects are analyzed, including how the horizontal- and vertical-polarization RCS vary with body weight and how the insect polarization pattern varies with body weight. Secondly, the dielectric and geometric models currently used to study insect RCS characteristics are compared by electromagnetic simulation: twelve models combining four dielectrics (water, spinal cord, dry skin, and a chitin-hemolymph mixture) with three geometric models (equivalent-size prolate spheroid, equivalent-mass prolate spheroid, and triaxial prolate spheroid) are examined, and it is found that the RCS characteristics of the equivalent-mass prolate spheroid are closest to those of real insects. Then, the fluctuation characteristics of insect dynamic RCS are analyzed based on insect echo data measured in the field by a Ku-band high-resolution insect radar. The measured dynamic RCS fluctuation data are fitted with four classical RCS fluctuation distribution models (χ2, Log-normal, Weibull, and Gamma), and the least-squares fitting error and goodness-of-fit test show that the Gamma distribution best describes the statistical characteristics of insect RCS fluctuations. Finally, applications of insect RCS characteristics to orientation, mass, and body-length measurement with insect radars are summarized.
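The distribution-comparison step can be illustrated with SciPy as below, using synthetic data rather than the measured Ku-band insect echoes; the candidate models and the sum-of-squared-errors criterion mirror the description above.

# Sketch: fit candidate fluctuation distributions to RCS samples and compare
# them by squared error against the empirical (density) histogram.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rcs = rng.gamma(shape=2.5, scale=0.4, size=2000)          # stand-in for measured RCS samples

candidates = {
    "chi2": stats.chi2, "lognorm": stats.lognorm,
    "weibull": stats.weibull_min, "gamma": stats.gamma,
}
hist, edges = np.histogram(rcs, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

for name, dist in candidates.items():
    params = dist.fit(rcs, floc=0)                         # fix location at 0
    sse = np.sum((hist - dist.pdf(centers, *params)) ** 2) # least-squares goodness measure
    print(f"{name:8s} SSE = {sse:.4f}")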
Current Synthetic Aperture Radar (SAR) target detection methods based on Convolutional Neural Networks (CNNs) rely on a large number of slice-level labeled training samples. However, labeling SAR images at slice level requires substantial labor and material resources, whereas labeling at image level, which only indicates whether an image contains the target of interest, is much easier. In this paper, a semi-supervised CNN-based SAR image target detection method is proposed that uses a small number of slice-level labeled samples and a large number of image-level labeled samples. The target detection network consists of a region proposal network and a detection network. Firstly, the target detection network is trained with the slice-level labeled samples; after training converges, the output slices constitute the candidate region set. Then, the image-level labeled clutter samples are fed into the network, and the resulting negative slices are added to the candidate region set. Next, the image-level labeled target samples are fed into the network as well; after positive and negative slices are selected from the network output, they are added to the candidate region set. Finally, the detection network is trained with the updated candidate region set. Updating the candidate region set and training the detection network alternate until convergence. Experimental results on measured data demonstrate that the proposed method achieves performance similar to fully supervised training with a much larger set of slice-level samples.
A micro-motion gesture recognition method based on multi-channel Frequency Modulated Continuous Wave (FMCW) millimeter-wave radar is proposed, together with an optimal radar parameter design criterion for extracting micro-motion gesture features. Time-frequency analysis is performed on the radar echo reflected by the hand, and the target's range-Doppler spectrum, range spectrum, Doppler spectrum, and horizontal angle spectrum are estimated. A range-Doppler-time-map feature is then designed; together with the range-time-map, Doppler-time-map, and horizontal-angle-time-map features and a joint three-feature representation with fixed frame length, it is used to characterize seven classes of micro-motion gestures. These gesture features are captured and aligned according to differences in amplitude and speed during the gesture motion. A five-layer lightweight convolutional neural network is then designed to classify the gesture features. Experimental results show that the range-Doppler-time-map feature designed in this paper characterizes micro-motion gestures more accurately than the other features and generalizes better to untrained test subjects.
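The range-Doppler-time feature construction can be sketched as two FFTs plus frame stacking; the array shapes below are illustrative assumptions, not the paper's radar parameters.

# Sketch: a fast-time FFT per chirp gives range, a slow-time FFT across chirps
# in a frame gives Doppler, and stacking frames over the gesture gives the
# time axis of the range-Doppler-time map.
import numpy as np

def range_doppler_map(frame):
    """frame: (n_chirps, n_samples) beat signal for one frame."""
    rng_fft = np.fft.fft(frame, axis=1)                       # fast time -> range bins
    rd = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0) # slow time -> Doppler bins
    return 20 * np.log10(np.abs(rd) + 1e-12)                  # magnitude in dB

def range_doppler_time(frames):
    """frames: (n_frames, n_chirps, n_samples) -> stacked RD maps over time."""
    return np.stack([range_doppler_map(f) for f in frames])

frames = np.random.randn(32, 64, 128)       # 32 frames x 64 chirps x 128 samples
print(range_doppler_time(frames).shape)     # (32, 64, 128)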
Millimeter-wave radar is robust in various environments such as rain, fog, and snow, and has huge potential in applications such as automotive radar and intelligent robots. At the same time, the rapid development of silicon technology has raised the cutoff frequency of transistors, making it possible to implement low-cost millimeter-wave radar SoCs in silicon. Recently, much research has been dedicated to improving the performance of silicon-based millimeter-wave radar SoCs at both the system level and the key-building-block level. This paper reviews the current research status and future trends of silicon-based millimeter-wave radar SoCs.
Ground Penetrating Radar (GPR), as a non-destructive technology, has been widely used to detect, locate, and characterize subsurface objects; example applications include underground utility mapping and bridge deck deterioration assessment. However, manually interpreting GPR scans to detect buried objects and estimate their positions is time-consuming and labor-intensive, so automatic target detection is necessary for practical application. To this end, this paper discusses the feasibility of using GPR to estimate target positions and reviews the progress made by domestic and international scholars on automatic hyperbolic signature detection in GPR scans. It then summarizes and compares processing methods for target detection. It is concluded that future research should focus on developing deep-learning-based methods to automatically detect and estimate subsurface features for on-site applications.
Pattern recognition algorithms can discover valuable information in massive biomedical image data, guiding both basic research and clinical application. In recent years, with advances in the theory and practice of pattern recognition and machine learning, especially the emergence and application of deep learning, interdisciplinary research across artificial intelligence, pattern recognition, and biomedicine has become a hotspot and has achieved many breakthroughs. This review briefly introduces the common framework and algorithms of image pattern recognition, summarizes their applications to biomedical image analysis, including fluorescence microscopy images, histopathological images, and medical radiological images, and finally analyzes and anticipates several potential research directions.
Image thresholding methods based on rough entropy segment images without any prior information other than the images themselves. Two problems must be considered in rough-entropy-based thresholding: measuring the incompleteness of knowledge about an image, and granulating the image. In this paper, reciprocal rough entropy, a new form of rough entropy, is defined to measure the incompleteness of image information. To granulate the image effectively, a granule size selection method based on the homogeneity histogram is employed. The proposed reciprocal rough entropy is simple in both expression and calculation, and experimental results verify the effectiveness of the proposed algorithm.
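For context, the classical granule-based rough entropy objective on which such thresholding methods build can be written as follows (a Pal-style formulation given for orientation; the reciprocal form proposed in the paper is a different measure and is not reproduced here). Here the image is partitioned into granules, and the lower (upper) approximations collect the granules that lie entirely (partially) in the object or background defined by threshold T.

\begin{align}
  \rho_{O_T} = 1 - \frac{|\underline{O}_T|}{|\overline{O}_T|}, \qquad
  \rho_{B_T} = 1 - \frac{|\underline{B}_T|}{|\overline{B}_T|}, \qquad
  RE_T = -\frac{e}{2}\left[\rho_{O_T}\ln\rho_{O_T} + \rho_{B_T}\ln\rho_{B_T}\right], \qquad
  T^{*} = \arg\max_{T} RE_T .
\end{align}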
Compressed Sensing (CS) theory is one of the most active research fields in electronic information engineering. CS overcomes the limits dictated by the Nyquist sampling theorem: it shows that the original signal can be recovered with high probability from far fewer measurements than classical sampling requires, saving data acquisition and processing time without losing information. CS theory can essentially be regarded as a tool for linear signal recovery, so it has clear advantages for inverse problems in signals and images. Image degradation is one such inverse problem, and restoring high-quality images from degraded ones is the image optimization process considered here. To promote academic research and practical application of CS theory, this paper introduces the basic principle of CS and, building on previous research, surveys CS-based image optimization in three main areas: denoising, deblurring, and super-resolution. Finally, open problems and challenges are discussed, and current trends are analyzed to provide reference and guidance for future work.
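A minimal ISTA sketch for the generic CS recovery problem min ||y - Ax||_2^2 + lambda*||x||_1 is given below; it is a textbook solver for illustration, and the surveyed denoising, deblurring, and super-resolution methods build considerably more structure on top of this.

# Iterative Shrinkage-Thresholding Algorithm (ISTA) on a synthetic sparse problem.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)             # random Gaussian sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative recovery error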
The massive high-dimensional measurements accumulated by distributed control systems bring great computational and modeling complexity to traditional fault diagnosis algorithms, which fail to exploit higher-order information for online estimation. In view of its powerful representation learning ability, deep-learning-based fault diagnosis has been extensively studied in both academia and industry, making intelligent process control more automated and effective. In this paper, deep-learning-based fault diagnosis is reviewed and summarized in four parts: stacked auto-encoder based, deep belief network based, convolutional neural network based, and recurrent neural network based fault diagnosis. Furthermore, necessary and potential trends, namely "integrated innovation", "data + knowledge", and "information fusion", are discussed from the perspectives of data preprocessing, network design, and decision-making.
Protograph Low Density Parity Check (P-LDPC) codes are widely used in various communication systems. To meet the requirements on error-correction performance, hardware resource consumption, and power consumption in different application scenarios, further design optimization of P-LDPC codes is needed. This paper focuses on Joint Source-Channel Coding (JSCC) systems based on Double P-LDPC (DP-LDPC) codes in standard channel environments, covering code design optimization and performance behavior. Recent work on the design and optimization of this system is summarized, showing that such optimization has significantly improved system performance and providing ideas for research on Industrial Internet (II)-oriented LDPC codes. Finally, future research directions are discussed for the reference of interested scholars.
To reduce the delay of computing tasks and the total system cost, Mobile Edge Computing (MEC) technology is applied to vehicular networks to further improve service quality. The delay problem in vehicular networks is studied with computing resources taken into account. To improve the performance of next-generation vehicular networks, a multi-platform offloading intelligent resource allocation algorithm is proposed. In the proposed algorithm, the K-Nearest Neighbor (KNN) algorithm selects the offloading platform (i.e., cloud computing, mobile edge computing, or local computing) for each computing task. For the resource allocation problem and system complexity of non-local computing, reinforcement learning is used to solve the resource allocation optimization problem in MEC-enabled vehicular networks. Simulation results demonstrate that, compared with the baseline algorithms in which all tasks are offloaded to the local or MEC server, the proposed algorithm achieves a significant reduction in latency cost, and the average system cost can be reduced by 80%.
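The platform-selection step can be illustrated with a small KNN classifier as below. The features, training labels, and thresholds are invented for illustration and are not the paper's data.

# Sketch: a KNN classifier maps task characteristics to one of the three
# offloading platforms before the RL-based resource allocation stage.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features: [task size (MB), required CPU cycles (GHz*s), max tolerated delay (ms)]
X_train = np.array([[0.1, 0.2, 10], [0.2, 0.5, 15], [5.0, 8.0, 200],
                    [4.0, 6.0, 150], [1.0, 2.0, 50], [0.8, 1.5, 40]])
y_train = ["local", "local", "cloud", "cloud", "mec", "mec"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict([[0.9, 1.8, 45], [6.0, 9.0, 300]]))   # expected: ['mec' 'cloud']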
Unbalanced load on edge computing servers seriously degrades service capability, so a task scheduling strategy for edge computing scenarios, Reinforced Q-learning-Automatic Intent Picking (RQ-AIP), is proposed. Firstly, the load balance of the entire network is measured from the load distribution across servers, and reinforcement learning is used to match each task to an appropriate edge server, meeting the differentiated resource needs of sensor node tasks. Then, a mapping between task delay and terminal transmit power is constructed to satisfy the physical-domain constraints, and, combining the social attributes of terminals, appropriate relay terminals are continually selected for tasks so that terminal-assisted scheduling balances the network load. Simulation results show that, compared with other load balancing strategies, the proposed strategy effectively alleviates the load among edge servers and the traffic of the core network, and reduces task processing latency.
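A tabular Q-learning sketch of the server-matching idea follows. The state encoding, reward shaping, and episode rule are simplified assumptions for illustration and are not the RQ-AIP formulation.

# Sketch: the state is a coarse load level per edge server, the action picks a
# server for the next task, and the reward favours a balanced load distribution.
import numpy as np

N_SERVERS, LOAD_LEVELS = 3, 4                 # load quantized into 4 levels per server
rng = np.random.default_rng(0)
Q = np.zeros((LOAD_LEVELS ** N_SERVERS, N_SERVERS))

def encode(loads):                            # map a load vector to a discrete state index
    idx = 0
    for l in loads:
        idx = idx * LOAD_LEVELS + min(l, LOAD_LEVELS - 1)
    return idx

loads = [0, 0, 0]
alpha, gamma, eps = 0.1, 0.9, 0.2
for step in range(5000):
    s = encode(loads)
    a = rng.integers(N_SERVERS) if rng.random() < eps else int(Q[s].argmax())
    loads[a] += 1                             # assign the task to server a
    reward = -np.std(loads)                   # balanced load -> higher (less negative) reward
    s2 = encode(loads)
    Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
    if max(loads) >= LOAD_LEVELS:             # episode ends when any server saturates
        loads = [0, 0, 0]

# Greedy action for state [2, 0, 1]; a balanced policy picks the least-loaded server (index 1).
print(Q[encode([2, 0, 1])].argmax())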