2020 Vol. 42, No. 2
The security of classical symmetric cryptography faces severe challenges in the quantum environment, which has prompted researchers to explore cryptographic algorithms that are secure in both classical and quantum settings, giving rise to the study of post-quantum symmetric cryptography. Research in this field is still at an early stage and has not yet formed a complete system. This paper categorizes the existing research results and surveys the state of the art from four aspects: quantum algorithms, cryptanalysis methods, security analysis, and provable security. Based on this analysis, the development trend of post-quantum symmetric cryptography is predicted, providing a reference for the analysis and design of symmetric cryptography in the quantum environment.
Generalized Feistel Schemes (GFS) are important components of symmetric ciphers and have been extensively studied in the classical setting. However, security evaluations of GFS in the quantum setting remain scarce. In this paper, improved polynomial-time quantum distinguishers on Type-1 GFS are presented in the quantum Chosen-Plaintext Attack (qCPA) setting and the quantum Chosen-Ciphertext Attack (qCCA) setting. In the qCPA setting, new quantum polynomial-time distinguishers are proposed on
Attacks, even those mounted with quantum computers, can in theory be detected when quantum communication protocols are used. Compared with entangled states, the Continuous Variable (CV) Gaussian coherent state is easier to prepare, so quantum communication network schemes based on coherent states are more economical and practical. A Measurement-Device-Independent (MDI) cluster-state quantum communication network scheme using coherent states is proposed, in which Quantum Secret Sharing (QSS) and Quantum Conference (QC) protocols can be implemented. A linear cluster-state scheme is proposed to implement a t-out-of-n QSS protocol, and a star cluster-state scheme to implement a four-user QSS protocol and a QC protocol. The entanglement-based CV MDI scheme is used to analyze the relationship between key rate and transmission distance for the symmetric and asymmetric versions of each protocol. The presented schemes provide a concrete reference for establishing CV MDI QSS and QC protocols in quantum networks using coherent states.
Attribute-Based Group Signature (ABGS) is a new variant of group signature that allows group members possessing certain attributes to sign messages anonymously on behalf of the whole group; once a dispute arises, an opening authority can effectively reveal and trace the real identity of the signer. The first lattice-based ABGS scheme with verifier-local revocation suffers from a large group public key and thus low space efficiency. To address this, a compact identity-encoding technique that needs only a fixed number of matrices is adopted to encode the identity information of group members, so that the bit-size of the group public key is independent of the number of group members. Moreover, a new Stern-like statistical zero-knowledge proof protocol is proposed, which can effectively prove a member's signing privilege, and whose revocation token is bound to a one-way and injective learning with errors function.
Regev introduced the Learning With Errors (LWE) problem in 2005; it has close connections to decoding random linear codes and has found wide application in cryptography, especially post-quantum cryptography. The LWE problem was originally introduced in the random access model, and there is evidence indicating the hardness of the problem in that model. It is well known that the LWE problem becomes vulnerable if the attacker is allowed to choose the samples; however, to the best of the authors' knowledge, a complete algorithm has not been published. In this paper, the LWE problem in the query samples access model is analyzed. The technique is to relate the problem to the hidden number problem and then apply the Fourier learning method to list decoding.
Due to advantages such as worst-case hardness assumptions, lattice-based cryptography is believed to be the most promising research direction in post-quantum cryptography. As one of the two main hard problems commonly used in lattice-based cryptography, the Learning With Errors (LWE) problem is widely used in constructing numerous cryptosystems. To improve the efficiency of lattice-based cryptosystems, Zhang et al. (2019) introduced the Asymmetric LWE (ALWE) problem. In this paper, the relation between the ALWE problem and the standard LWE problem is studied, and it is shown that for certain error distributions the two problems are polynomially equivalent, which paves the way for constructing secure lattice-based cryptosystems from the ALWE problem.
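As background for the two LWE abstracts above, a toy numpy sketch of what a standard LWE instance looks like: the attacker sees (A, b = As + e mod q) and must recover s, which is easy without the noise e and conjecturally hard with it. All parameters here are purely illustrative (far below cryptographic size), and this is not the ALWE construction from the paper.

```python
import numpy as np

def lwe_samples(n=8, m=16, q=97, sigma=1.0, seed=0):
    """Generate m toy LWE samples (A, b = A s + e mod q) for a random secret s."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=n)                          # secret vector
    A = rng.integers(0, q, size=(m, n))                     # uniform public matrix
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian error
    b = (A @ s + e) % q
    return A, b, s, e

A, b, s, e = lwe_samples()
# Without e, b would reveal s by Gaussian elimination over Z_q;
# the small noise is what makes the problem conjecturally hard.
assert np.all((A @ s + e) % 97 == b)
```

The asymmetry studied in the ALWE paper concerns the distributions of s and e; in this standard form the secret is uniform and the error is a rounded Gaussian.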
Lattice-based signature schemes are promising quantum-resistant replacements for classical signature schemes based on number-theoretic hard problems. An important approach to constructing lattice-based signatures combines the Fiat-Shamir transform with rejection sampling. Two of the five lattice signature schemes submitted to the post-quantum standardization project initiated by the National Institute of Standards and Technology are Fiat-Shamir signatures. One of them, Dilithium, is based on the Module Learning With Errors (MLWE) problem and features a simple signing algorithm that uses uniform sampling. Dilithium is built on generic lattices and adopts a compression technique to make the public key more compact. On the other hand, schemes using NTRU lattices outperform schemes using generic lattices in efficiency and parameter sizes. This paper is devoted to designing an efficient NTRU variant of Dilithium: by combining the advantages of NTRU lattices and uniform rejection sampling, the scheme enjoys a concise structure and gains performance improvements over other lattice-based Fiat-Shamir signatures without using extra compression techniques.
In view of the great success of quantum cryptography in key distribution, researchers have also tried to use quantum mechanics to construct many other cryptographic protocols. Anonymous authenticated key exchange is one such task for which a practical quantum solution is still awaited. To solve this problem, a quantum anonymous authenticated key exchange protocol is proposed based on a quantum oblivious key transfer scheme. It not only realizes user anonymity and mutual authentication between user and server, but also establishes a secure session key between the two parties. Besides, attacks by the server either fail or can be detected just like outside eavesdropping (the server is thus caught as a cheater), so the server generally will not cheat at the risk of gaining a bad reputation.
With the integration of information technologies such as the industrial Internet of Things (IoT), cloud computing, and Industrial Control Systems (ICS), the security of industrial data is at enormous risk. To protect the confidentiality and integrity of data in such a complex distributed environment, a communication scheme based on an Attribute-Based Encryption (ABE) algorithm is proposed, which integrates data encryption, access control, decryption outsourcing, and data verification, and additionally features constant ciphertext length. Finally, the scheme is analyzed in detail from three aspects: correctness, security, and performance overhead. Simulation results show that the algorithm has the advantage of low decryption overhead.
For the information hiding problem in low-rate speech coding, a steganographic algorithm based on the G.723.1 coding standard is proposed. In the pitch prediction coding process, the secret information is embedded by controlling the search range of the closed-loop pitch period (adaptive codebook), combined with the Random Position Selection (RPS) method and the Matrix Coding Method (MCM); the embedding is carried out within the speech coding process itself. The RPS method reduces the correlation between carrier code-words, and the MCM method reduces the rate of change of the carrier. Experimental results show that the average PESQ (Perceptual Evaluation of Speech Quality) deterioration rate of the algorithm is 1.63%, indicating good concealment.
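The Matrix Coding Method mentioned above follows the classic matrix-embedding idea from steganography: a minimal stand-alone sketch of the (1,3,2) Hamming-code variant, which hides 2 message bits in 3 cover bits while flipping at most one of them. This illustrates only the generic technique; the paper's actual embedding inside the G.723.1 pitch search is not reproduced here.

```python
def matrix_embed(cover, msg):
    """(1,3,2) matrix coding: embed 2 message bits into 3 cover bits,
    changing at most one cover bit."""
    c1, c2, c3 = cover
    # Syndrome: difference between currently extracted bits and target bits
    s1 = (c1 ^ c3) ^ msg[0]
    s2 = (c2 ^ c3) ^ msg[1]
    stego = list(cover)
    if (s1, s2) == (1, 0):
        stego[0] ^= 1          # flipping c1 changes only the first bit
    elif (s1, s2) == (0, 1):
        stego[1] ^= 1          # flipping c2 changes only the second bit
    elif (s1, s2) == (1, 1):
        stego[2] ^= 1          # flipping c3 changes both bits
    return stego

def matrix_extract(stego):
    """Recover the 2 hidden bits from 3 stego bits via XOR parities."""
    c1, c2, c3 = stego
    return [c1 ^ c3, c2 ^ c3]

stego = matrix_embed([1, 0, 1], [0, 1])
assert matrix_extract(stego) == [0, 1]
```

Changing at most one of every three carrier bits is what keeps the rate of change of the carrier, and hence the PESQ degradation, low.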
Considering that existing superpixel methods are usually unable to set an appropriate number of superpixels automatically or to adhere to image boundaries effectively, a new superpixel method is proposed in this paper, which uses local information to perform multi-level simple linear iterative clustering. First, the original image is initially segmented by Simple Linear Iterative Clustering based on Local Information (LI-SLIC). Then, each superpixel is segmented iteratively until its color standard deviation falls below a predefined threshold. Finally, over-segmented superpixels are merged according to the color differences between adjacent superpixels. Experiments on the Berkeley, Pascal VOC, and 3Dircadb databases, as well as comparisons with other methods, indicate that the proposed method adheres to image boundaries more accurately and prevents over- and under-segmentation more effectively.
Learning unsupervised representations from multivariate medical signals, such as multi-modality polysomnography and multi-channel electroencephalogram, has gained increasing attention in health informatics. To address the problem that existing models do not fully capture the multivariate-temporal structure of medical signals, an unsupervised multi-Context deep Convolutional AutoEncoder (mCtx-CAE) is proposed in this paper. Firstly, by modifying traditional convolutional neural networks, a multivariate convolutional autoencoder is proposed to extract multivariate context features within signal segments. Secondly, semantic learning is adopted to auto-encode temporal information among signal segments, further extracting temporal context features. Finally, an end-to-end multi-context autoencoder is trained with an objective function based on the shared feature representation. Experimental results on two public benchmark datasets (UCD and CHB-MIT) show that the proposed model outperforms state-of-the-art unsupervised feature learning methods on different medical tasks, demonstrating the effectiveness of the learned fused features in clinical settings.
To explore the correlation between face and audio in speaker recognition, a novel multimodal Generative Adversarial Network (GAN) is designed to map face features and audio features into a more closely connected common space. A triplet loss is then used to further constrain the relationship between the two modalities, narrowing the intra-class distance and widening the inter-class distance. Finally, the cosine distance between the common-space features of the two modalities is computed to judge whether face and voice match, and Softmax is used to recognize the speaker identity. Experimental results show that this method can effectively improve the accuracy of speaker recognition.
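The triplet-loss constraint described above can be sketched in a few lines: it penalizes an anchor embedding that is not at least a margin closer to its positive (same identity) than to its negative (different identity). The embeddings and margin below are illustrative toy values, not the paper's trained GAN features.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull anchor-positive together,
    push anchor-negative apart by at least `margin` (squared distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

face = np.array([0.1, 0.9])           # face embedding (illustrative)
voice_same = np.array([0.12, 0.88])   # matching voice embedding
voice_other = np.array([0.8, 0.2])    # non-matching voice embedding

# A well-separated triplet incurs zero loss; a violating one is penalized.
good = triplet_loss(face, voice_same, voice_other)
bad = triplet_loss(face, voice_other, voice_same)
```

Minimizing this loss over many (face, matching voice, other voice) triplets is what drives the two modalities into a shared space where cosine distance becomes a meaningful match score.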
Based on the network structure and training methods of the Extreme Learning Machine (ELM), a Correntropy-based Fusion Extreme Learning Machine (CF-ELM) is proposed. Considering that most classification methods fuse representation-level features insufficiently, kernel mapping and coefficient weighting are combined into a Fusion Extreme Learning Machine (F-ELM), which can effectively fuse representation-level features. On this basis, the Mean Square Error (MSE) loss function is replaced by a correntropy-based loss function, and a correntropy-based iterative update formula for training the weight matrices of the F-ELM is derived to enhance classification ability and robustness. Extensive experiments on the Caltech 101, MSRC, and 15-Scene datasets show that CF-ELM further fuses representation-level features and improves classification accuracy.
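To illustrate why swapping MSE for a correntropy-based loss improves robustness, a minimal numpy sketch: the correntropy-induced loss is bounded by 1 per sample, so an outlier saturates instead of dominating the objective. The kernel width and data below are illustrative; this is not the CF-ELM weight-update formula itself.

```python
import numpy as np

def correntropy_loss(y_true, y_pred, sigma=1.0):
    """Correntropy-induced loss with a Gaussian kernel: each sample
    contributes at most 1, so large outlier errors saturate."""
    err = y_true - y_pred
    return np.mean(1.0 - np.exp(-err ** 2 / (2 * sigma ** 2)))

y = np.array([1.0, 2.0, 3.0, 100.0])    # last target is a gross outlier
pred = np.array([1.1, 1.9, 3.2, 3.0])   # predictions good except on the outlier

mse = np.mean((y - pred) ** 2)          # blows up on the outlier
c_loss = correntropy_loss(y, pred)      # stays bounded below 1
```

Under MSE the single outlier dwarfs the three good predictions, whereas the correntropy loss keeps their contributions comparable, which is the robustness property the CF-ELM update exploits.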
To solve the problem of inaccurate feature representation caused by indistinctive appearance differences in person re-identification, a new Matrix Metric Learning algorithm based on a Bidirectional Reference set (BRM2L) is proposed. Firstly, reciprocal-neighbor reference sets are constructed in each camera view by the reciprocal-neighbor scheme. To ensure the robustness of the reference sets, the reference sets of different camera views are considered jointly to generate the Bidirectional Reference Set (BRS). Hard samples mined by the BRS are then used to represent feature descriptors, yielding accurate representations of appearance differences. Finally, these representations are used to conduct more effective matrix metric learning. Experimental results on several public datasets demonstrate the superiority of the proposed method.
Satellite health monitoring is an important concern for satellite security, and satellite telemetry data is its only data source; accurate prediction of missing telemetry values is therefore an important forward-looking approach to satellite health diagnosis. For the high-dimensional structure formed by the satellite's multi-component system, multiple instruments, and multiple monitoring indices, a Tensor Factorization based Prediction (TFP) algorithm for missing telemetry data is proposed; it surpasses most existing methods, which apply only to low-dimensional data or a specific dimension. The proposed algorithm makes accurate predictions by modeling the telemetry data as a tensor so as to exploit its high-dimensional structure; the component matrices computed by tensor factorization are used to reconstruct the tensor, which yields the predictions of the missing data; and an efficient optimization algorithm is proposed to implement the related tensor calculations, for which the optimal parameter settings are rigorously derived. Experiments show that the proposed algorithm achieves better prediction accuracy than most existing algorithms.
A fully digital feedforward time-skew calibration algorithm for Time-Interleaved Analog-to-Digital Converters (TIADCs) is presented. The time-skew estimation adopts a feedforward extraction method with an improved derivative module of the time-skew function, which greatly improves estimation accuracy when the input signal frequency is high. Meanwhile, the time-skew function is based on subtraction, in order to reduce the complexity of the estimation unit. Finally, the time skew is corrected by first-order Taylor compensation. Simulation results show that for a multi-frequency input signal, the Spurious-Free Dynamic Range (SFDR) of a 4-channel 14-bit TIADC system increases from 48.6 dB to 80.7 dB after the proposed time-skew correction. Compared with the traditional feedforward calibration structure based on correlation operations, the effective calibration input signal bandwidth increases from 0.19 to 0.39, which greatly broadens the application range of the calibration algorithm.
Based on a study of the spur line, a novel spur-line structure is proposed, and the design of a novel Ultra-WideBand (UWB) power divider based on this structure is described for the 2.5~13.2 GHz frequency range. The designed device is compact, has a simple structure, and shows good in-band frequency response: its return loss is less than –12 dB and its insertion loss is less than 3.5 dB. The design equations are based on odd-even mode analysis and transmission line theory. The Beetle Antennae Search (BAS) algorithm is used to improve the efficiency and accuracy of the power divider design. To verify the accuracy of the design, a UWB power divider is designed using RO4003C as the substrate. The results validate the feasibility of the spur-line-based design and demonstrate that the BAS algorithm offers shorter running time and improved precision compared with other optimization methods, so it can be widely used in UWB power divider design.
An improved high-order Finite Difference Time Domain (FDTD) optimization method is proposed in this paper and compared with the traditional high-order FDTD method. The algorithm is based on Ampère's circuital law and numerically searches for a set of optimal coefficients that minimize the global dispersion error of the FDTD method. Simulations of point-source radiation at different resolutions show that this method retains very low phase error even at low resolution. It provides an effective solution to the numerical dispersion problem in the modeling of large-size structures.
To study the subtle feature recognition of Identification Friend or Foe (IFF) radiation source signals, this paper proposes an IFF individual recognition method based on Ensemble Intrinsic Time-scale Decomposition (EITD), addressing the insufficient research on individual identification of IFF radiation sources in complex noise environments. In this algorithm, EITD decomposes the sampled signals into several meaningful components and yields the time-frequency energy distribution of the IFF radiation source signals. Through texture analysis of the time-frequency energy spectrum, the unintentional modulation features of the radiation source signals are represented by image texture features, which are fed to a Support Vector Machine (SVM) for classification and recognition. Experiments show that the proposed method is more accurate than methods based on the Hilbert-Huang Transform (HHT) and Intrinsic Time-scale Decomposition (ITD).
Improving time-frequency resolution plays a crucial role in the analysis and reconstruction of multi-component non-stationary signals. Traditional time-frequency analysis methods with a fixed window give low time-frequency concentration and can hardly distinguish multi-component signals with fast-varying frequencies. In this paper, an adaptive synchrosqueezing transform that exploits the local information of the signal is proposed for signals with fast-varying frequencies. The proposed method offers high time-frequency resolution, is superior to existing synchrosqueezing methods, and is particularly suitable for multi-component signals with close and fast-varying frequencies. Meanwhile, using the separability condition, the adaptive window parameters are estimated by local Rényi entropy. Finally, experiments on synthetic and real signals demonstrate the correctness of the proposed method, which is suitable for analyzing and recovering complex non-stationary signals.
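The Rényi entropy mentioned above is a standard concentration measure for time-frequency maps: a more concentrated distribution has lower entropy, so minimizing it over candidate window parameters selects the sharpest representation. A minimal sketch on toy data (not the paper's adaptive transform):

```python
import numpy as np

def renyi_entropy(tfr, alpha=3):
    """Order-alpha Rényi entropy of a time-frequency distribution;
    lower entropy indicates a more concentrated representation."""
    p = np.abs(tfr)
    p = p / p.sum()                      # normalize to a distribution
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

# A TF map concentrated on one ridge vs. a completely diffuse one
concentrated = np.zeros((32, 32))
concentrated[16, :] = 1.0                # energy on a single frequency line
diffuse = np.ones((32, 32))              # energy spread over the whole plane

h_conc = renyi_entropy(concentrated)
h_diff = renyi_entropy(diffuse)
```

In an adaptive scheme, the window parameter yielding the smallest local entropy would be kept for that region of the signal.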
For cases with small samples, the noise subspace estimated from the sample covariance matrix deviates from the true one, which causes the MUltiple SIgnal Classification (MUSIC) Direction-Of-Arrival (DOA) estimation performance to break down. To deal with this problem, an iterative algorithm that improves MUSIC performance by modifying the signal subspace is proposed in this paper. Firstly, the DOAs are roughly estimated from the noise subspace obtained from the sample covariance matrix. Then, exploiting the sparsity of the signals and the low-rank property of the steering matrix, a new signal subspace is obtained from the steering matrix formed by the estimated DOAs and their adjacent angles. Finally, the signal subspace is modified by solving an optimization problem. Simulation results demonstrate that the proposed algorithm improves subspace estimation accuracy and thereby the MUSIC DOA estimation performance, especially in small sample cases.
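For reference, the baseline MUSIC pseudo-spectrum that the proposed subspace modification builds on can be sketched as follows: a plain sample-covariance implementation for a uniform linear array, with illustrative parameters (one source, 8 half-wavelength-spaced elements), not the paper's iterative algorithm.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Classic MUSIC pseudo-spectrum for a uniform linear array.
    X: snapshot matrix (n_antennas x n_snapshots); d: spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : n - n_sources]          # noise-subspace eigenvectors
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))  # steering vector
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

# One source at 20 degrees, 8-element ULA, 200 snapshots, light noise
rng = np.random.default_rng(1)
n, snaps, true_doa = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(true_doa)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
X = np.outer(a, s) + noise

grid = np.arange(-90, 91, 1)
est = grid[np.argmax(music_spectrum(X, 1, grid))]
```

With very few snapshots, R becomes a poor estimate and the peaks of this spectrum degrade, which is exactly the small-sample breakdown the abstract's subspace-modification step targets.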
For the radio-frequency stealth control measure of radar intermittent radiation, the relationship between radiation time ratio and positioning performance is studied, taking cross location with two stations as an example. Firstly, the control method of radar intermittent radiation is analyzed. Then, under the assumption of uniform linear motion of the carrier aircraft, the influence of the radiation time ratio on positioning accuracy is modeled using the Cramer-Rao Lower Bound (CRLB). Finally, the solution steps of the model are given and verified by simulation. The simulation results show that different radiation time ratios affect location performance differently: when the initial distance is 100 km and the radiation time ratio is less than 0.5, the location convergence time exceeds 10 s, which effectively degrades the performance of cross location with two stations.
For the HyperSonic Vehicle (HSV) borne radar platform, a multi-channel SAR-GMTI clutter suppression method is presented based on the hypersonic platform forward-squint mode. First, range walk correction and range compression are performed in the time domain, and the range envelope is aligned simultaneously with phase error compensation. Then, the Doppler-extended signal is compressed by a third-order azimuth Chirp Fourier Transform (CFT), and the azimuth envelope of the echo is aligned with simultaneous phase error compensation. Next, Digital Beam Forming (DBF) technology is applied in the range time-azimuth CFT domain, nulling the clutter and its ambiguous components to achieve Space-Time Adaptive Processing (STAP). The stationary clutter and its ambiguous components can be suppressed effectively, and the echo signals of moving targets can be extracted without blurring.
A task scheduling algorithm based on value optimization is proposed for phased array radar. Firstly, the schedulability of tracking tasks is determined through feasibility analysis and a selection operation on the task queue, using the proposed schedulability parameters. Then, a dynamic task value function of the actual execution time is established according to the peak value and value-changing slope of each task, and a value optimization model for tracking task scheduling is constructed from this value function; assigning execution times with this model achieves better timeliness. Finally, search tasks are scheduled in the idle intervals between the tracking tasks to be executed. Simulation results show that the proposed algorithm reduces the average time-shift ratio and improves the value-achieving ratio compared with traditional scheduling algorithms.
Dual-satellite TDOA/FDOA localization is achieved from the intersection of the TDOA hyperboloid and the FDOA hyperboloid, so localization accuracy depends on TDOA/FDOA accuracy. To measure the TDOA/FDOA accurately, a measurement method based on a short synthetic aperture is presented, which improves measurement accuracy by using a certain length of synthetic aperture. For narrowband signals, the method can estimate the Doppler frequency at a single satellite, and the frequency difference is obtained from the estimates of the two satellites. For wideband signals, high-precision frequency-difference estimates are obtained by interferometric processing of the two satellites' data. For short-term-stable radar signals, processing results on STK simulation data confirm the effectiveness of the proposed method.
For the spectrum allocation problem arising when Device-to-Device (D2D) communication reuses cellular users' spectrum resources in heterogeneous networks, a D2D communication resource allocation mechanism based on an improved discrete Pigeon-Inspired Optimization (PIO) algorithm is proposed. Users' Quality of Service (QoS) is guaranteed by setting a Signal-to-Interference-plus-Noise Ratio (SINR) threshold, and transmit power is assigned to users by power control algorithms. Resources are allocated to D2D users by the Binary discrete PIO based on Motion Weight (MWBPIO) algorithm. To ensure the communication quality of edge users, D2D communication and relay technology are used to establish D2D relay links, so that system performance can be maximized. Simulation results show that the proposed scheme effectively suppresses the interference introduced by D2D users in heterogeneous communication systems, improves the communication quality of edge users, and raises both spectrum utilization and overall system performance.
To address the fact that classifiers in Brain-Computer Interface (BCI) systems are seldom combined with the brain's cognitive process, a Chernoff-weighted classifier integration framework is proposed and applied to synchronous motor-imagery BCI. In this method, the statistical characteristics of the ElectroEncephaloGraphy (EEG) signals are obtained by analysis at each time point of the synchronous BCI, and a probability model is then established to compute the Chernoff error bound, which is adopted as the weight of the common classifier in the discriminant process. Test experiments on datasets from BCI competitions combine the proposed framework with LDA, SVM, and ELM respectively. The experimental results demonstrate that the proposed framework shows competitive performance compared with other methods.
Ultra-Dense Networks (UDNs) shorten the distance between terminals and nodes, greatly improving spectral efficiency and expanding system capacity, but the performance of cell-edge users is seriously degraded. Reasonable planning of Virtual Cells (VCs) can only reduce the interference of moderate-scale UDNs; the interference among users under overlapping base stations within a virtual cell must be handled by cooperative user clusters. A user clustering algorithm with Interference Increment Reduction (IIR) is proposed, which minimizes the total intra-cluster interference, and ultimately maximizes the system sum rate, by continuously swapping the users with maximum interference between clusters. Compared with the K-means algorithm, this algorithm needs no cluster heads to be specified and avoids local optima without increasing computational complexity. Simulation results show that the system sum rate, and especially the throughput of edge users, is effectively improved when the network is densely deployed.
Network coding has been widely used in wireless multicast networks in recent years due to its high transmission efficiency. To address the low efficiency of automatic retransmission caused by packet loss in wireless multicast networks, a new Coding Scheduling strategy based on Arriving Time (CSAT) in a virtual queue is proposed. To improve coding efficiency, virtual queues are used to store packets that have been generated but not yet received by all receivers. Considering queue stability, the CSAT strategy sends packets from the primary and secondary queues at a certain ratio, and combines coded and uncoded transmission in the secondary queue. According to the arrival order of packets in the queue, the sending method that allows more packets to participate in coding is selected. Simulation results show that CSAT not only effectively improves packet transmission efficiency, but also increases network throughput and reduces the average waiting delay.
In LTE-V2X (Long Term Evolution-Vehicle to Everything) systems, the cellular link and the SideLink (SL) are usually unstable during handover, and the situation worsens when the SL is employed to assist the normal handover process. To solve these problems, an SL-assisted joint handover scheme is proposed for vehicles in the network, which mainly includes a joint handover procedure design, signaling design, and a joint handover decision algorithm. Firstly, the SL is established for vehicles about to request handover; it is set up between the pair of vehicles with the best channel quality to ensure link reliability. Secondly, to tackle the vulnerability of the SL in a fast-changing radio environment, the joint handover signaling procedure is optimized for two different realistic circumstances. Finally, the vehicle's moving direction is also taken into account in the handover decision, reducing unnecessary handover operations. Simulation results illustrate that the SL-assisted joint handover scheme can effectively improve the handover success rate and significantly reduce the number of LTE handovers.
To estimate the non-stationary channel in the massive Multiple-Input Multiple-Output (MIMO) uplink, the 2D sparse structure of the channels in the temporal-spatial domain is exploited to design an iterative channel estimation algorithm based on the Dirichlet Process (DP) and Variational Bayesian Inference (VBI), which improves accuracy at lower pilot overhead and computational complexity. Since stationary channel models are no longer suitable for massive MIMO systems, a non-stationary channel prior model based on the Dirichlet process is constructed, which maps the physically spatially correlated channels to a probabilistic channel sharing the same sparse temporal vector. By applying VBI, a channel estimation iteration algorithm with low pilot overhead and complexity is designed. Experimental results show that the proposed method estimates the channel more accurately than state-of-the-art methods, while remaining robust to changes in key system parameters.
In a Non-Orthogonal Multiple Access (NOMA) based cellular network with Vehicle-to-Vehicle (V2V) communication, to mitigate the co-channel interference between V2V users and cellular users and to solve the power allocation problem under the NOMA principle, an energy-efficient dynamic resource allocation algorithm is proposed. Firstly, a stochastic optimization model covering subchannel scheduling, power allocation, and congestion control is established to maximize energy efficiency while guaranteeing the delay and reliability of V2V users and the rate of cellular users. Then, leveraging the Lyapunov stochastic optimization method, the traffic queues are stabilized by admitting as much traffic data as possible without causing network congestion, radio resources are allocated dynamically according to real-time network traffic, and a suboptimal subchannel matching algorithm is designed to obtain the user scheduling scheme. Furthermore, the power allocation policy is obtained using successive convex optimization theory and the Lagrange dual decomposition method. Finally, simulation results show that the proposed algorithm improves system energy efficiency while ensuring the Quality of Service (QoS) requirements of different users and network stability.
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the receiver often needs to know the channel state information, because the frequency-selective fading channel generates inter-symbol interference during data transmission. In maritime communication, channel estimation is often needed for a channel subject to the interference of various external factors. To improve estimation performance, a Fast Bayesian Matching Pursuit based on singular value decomposition for Optimizing the observation matrix (FBMPO) is proposed, which both exploits the sparsity of the maritime communication channel and reduces the influence of channel uncertainty. Computer simulations show that, compared with traditional channel estimation algorithms, the proposed algorithm can effectively improve the accuracy of channel estimation.