2019 Vol. 41, No. 1
When the Channel State Information (CSI) is imperfect in heterogeneous networks with simultaneous wireless information and power transfer, a robust secure transmission scheme aided by Artificial Noise (AN) is proposed to guarantee the security and reliability of information and energy transfer simultaneously. By jointly designing the downlink information beamforming and the AN matrices of the macrocell base station and femtocell base stations, potential eavesdroppers are jammed and the energy harvesting performance of the system is improved. To obtain the optimal designs, the problem of maximizing the system's energy harvesting performance is modeled under the base station power limit and outage probability constraints on information transfer, energy transfer and confidential information eavesdropping. The modeled problem is non-convex; to address it, the problem is first transformed into an equivalent, more tractable form. The Bernstein-type inequality is then utilized to handle the outage probability constraints, turning the problem into a convex one. Simulation results validate the security and robustness of the proposed scheme.
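For context, the Bernstein-type inequality invoked above is commonly stated as follows in the outage-constrained robust beamforming literature; this is a standard textbook form with generic symbols, not reproduced from the paper itself. For $\mathbf{x}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_n)$, Hermitian $\mathbf{A}$, $\mathbf{u}\in\mathbb{C}^n$, $c\in\mathbb{R}$ and $\rho\in(0,1]$, the chance constraint $\Pr\{\mathbf{x}^H\mathbf{A}\mathbf{x}+2\,\mathrm{Re}(\mathbf{u}^H\mathbf{x})+c\ge 0\}\ge 1-\rho$ is implied by the deterministic (convex-representable) condition

```latex
\operatorname{Tr}(\mathbf{A})
  - \sqrt{2\ln(1/\rho)}\,\sqrt{\|\mathbf{A}\|_F^2 + 2\|\mathbf{u}\|^2}
  - \ln(1/\rho)\, s^{+}(\mathbf{A}) + c \;\ge\; 0,
\qquad
s^{+}(\mathbf{A}) \;=\; \max\{\lambda_{\max}(-\mathbf{A}),\, 0\}.
```

Introducing slack variables for the square-root and $s^{+}(\mathbf{A})$ terms turns this into linear-matrix-inequality constraints, which is what makes the transformed problem convex.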
This paper combines the existing enhanced inter-cell interference coordination technology with the downlink joint transmission scheme of coordinated multi-point transmission to solve the problem of severe cross-layer interference in 5G ultra-dense heterogeneous networks. Using tools from stochastic geometry, expressions for the outage probability, spectrum efficiency and average ergodic capacity of a two-layer ultra-dense heterogeneous network are derived. Simulation results show that the proposed joint interference coordination scheme not only requires fewer cooperative users than traditional coordinated multi-point transmission, but also reduces the outage probability of users in the network by 15% at 0 dB. Compared with enhanced inter-cell interference coordination, when the bias value is 10 dB, the user spectrum efficiency in the extended area is improved by 35% and the average ergodic capacity of the entire network is increased by 3.4%.
To address the unreasonable virtual resource allocation caused by service uncertainty and information feedback delay in wireless virtualized networks, an online adaptive virtual resource allocation algorithm based on Auto Regressive Moving Average (ARMA) prediction is proposed. Firstly, the problem of minimizing the cost of the virtual networks by jointly allocating time-frequency resources and buffer space is studied, while guaranteeing the overflow probability of each virtual network. Secondly, considering the different demands of virtual networks for different resources, a dynamic resource scheduling mechanism with multiple time scales is designed: on the slow time scale, the buffer space reservation strategy is realized based on the ARMA prediction information; on the fast time scale, the virtual networks are sorted according to the overflow probability derived from the large deviation principle and the time-frequency resources are dynamically scheduled, so as to meet the service demand. Simulation results show that the algorithm can effectively reduce the bit loss rate and improve the utilization of physical resources.
As a competitive Non-Orthogonal Multiple Access (NOMA) technique, Sparse Code Multiple Access (SCMA) efficiently improves the system spectral efficiency by combining high-dimensional modulation and sparse spread spectrum. To address the existing issues of SCMA codebook design, an optimization method for SCMA codebooks is proposed for both Rayleigh fading and Gaussian channels. First, by rotating the base constellation and the mother constellation, the minimum Euclidean distance between the projection points of the mother constellation in each dimension, and between the constellation points corresponding to each user in the total constellation on a single resource block, is maximized to improve the performance of the SCMA codebooks over Gaussian channels. On this basis, by rotating the constellations of multiple users superimposed on each resource block, the corresponding minimum product distance and the Signal Space Diversity (SSD) order of the users' constellations are optimized. Finally, an additional diversity gain is achieved by using Q-coordinate interleaving to further improve the performance over Rayleigh fading channels. Simulation results show that the proposed SCMA codebooks outperform HUAWEI's SCMA codebooks and Low Density Signature Multiple Access (LDS-MA) in both Gaussian and Rayleigh fading channels.
In wireless relay networks, random transmission delays among relay nodes lead to substantial performance degradation, for which the delay-tolerant Distributed Linear Convolutive Space-Time Code (DLC-STC) was proposed. However, its diversity gain on fast fading Rayleigh channels has been unclear. This paper analyzes the diversity gain of the DLC-STC on fast fading Rayleigh channels. It is shown that the DLC-STC can achieve the full asynchronous cooperative diversity order with Maximum Likelihood (ML) receivers on fast fading Rayleigh channels, although it was originally proposed for slow fading channels. Numerical results verify the theoretical analysis and show that MMSE-DFE receivers can collect the same diversity order as ML receivers on fast fading Rayleigh channels.
With the rapid development of Internet applications, conventional Routing and Spectrum Assignment (RSA) faces new challenges. Integrating Degraded Service (DS) technology into the Elastic Optical Network (EON) provides a new direction for reducing the blocking rate and assuring Quality of Experience (QoE). To address the spectrum-resource inefficiency and revenue decline caused by DS, a Mixed Integer Linear Programming (MILP) model is proposed with a joint objective that minimizes spectrum consumption together with the priorities and DS frequency of online services. A dynamic RSA algorithm based on differentiated DS and adaptive modulation is proposed, which jointly considers service-priority differentiation, adaptive modulation and DS technology. Meanwhile, a DS loss function and a DS window selection strategy are designed to differentiate service levels, and suitable spectrum locations and resources are assigned to services about to be blocked. A network revenue function that captures the balance between spectrum and revenue is designed to achieve efficient utilization of spectrum resources, reduce the impact of degradation, and enhance network revenue. Simulation results verify the advantages of the proposed algorithm in terms of blocking rate, network profit, etc.
The Digital Video Broadcasting-Common Scrambling Algorithm (DVB-CSA) is a hybrid symmetric cipher made up of a block cipher and a stream cipher, and is often used to protect MPEG-2 signal streams. This paper focuses on impossible differential cryptanalysis of the block cipher in DVB-CSA, called CSA-BC. By exploiting the details of the S-box, a 22-round impossible differential is constructed, which is two rounds longer than the previous best result. Furthermore, a 25-round impossible differential attack on CSA-BC is presented, which can recover 24 bits of the key. For this attack, the data complexity, computational complexity and memory complexity are 2^53.3 chosen plaintexts, 2^32.5 encryptions and 2^24 units, respectively. The previous best impossible differential cryptanalysis of CSA-BC attacks 21 rounds and recovers 16 bits of the key. In terms of both the number of rounds and the recovered key, this result significantly improves upon the previous best.
In recent years, searchable encryption and fine-grained attribute-based access control encryption have been widely used in cloud storage environments. Existing searchable attribute-based encryption schemes have some flaws: they support only single-keyword search without attribute revocation, and single-keyword search may waste computing and bandwidth resources because only part of the search results is relevant. A verifiable multi-keyword search encryption scheme supporting attribute revocation is proposed. The scheme allows users to verify the correctness of cloud server search results while supporting the revocation of user attributes in a fine-grained access control structure, without updating keys or re-encrypting ciphertexts during the revocation stage. The scheme is proven secure under the decisional linear assumption, and the analysis shows that it resists chosen-keyword attacks and preserves keyword privacy in the random oracle model, with high computational efficiency and storage effectiveness.
Proxy re-encryption plays an important role in encrypted data sharing and related applications in cloud computing. Currently, almost all constructions of identity-based proxy re-encryption over lattices are in the random oracle model. To address this problem, an efficient identity-based proxy re-encryption scheme over lattices is constructed in the standard model, where the identity string is mapped to a single vector, yielding shorter secret keys for users. The proposed scheme is bidirectional and multi-use; moreover, it is semantically secure against adaptive chosen-identity and chosen-plaintext attacks based on the Learning With Errors (LWE) problem in the standard model.
The security of wireless transmission has become a significant bottleneck in the development of the Internet of Things (IoT). The limited computing capability and hardware configuration of IoT terminals, together with eavesdroppers equipped with massive Multiple-Input Multiple-Output (MIMO), bring new challenges to physical layer security. To solve this problem, a lightweight noise injection scheme is proposed that can combat massive MIMO eavesdroppers. Firstly, the proposed noise injection scheme is introduced, along with the corresponding secrecy analysis. Then, a closed-form expression for the throughput under the proposed scheme is derived. Furthermore, the slot allocation coefficient and power allocation coefficient are optimized. Analytical and simulation results show that the proposed noise injection scheme can secure private information transmission through appropriate design of the IoT system parameters.
In Network Function Virtualization (NFV) environments, existing placement methods cannot guarantee the mapping cost while optimizing the network delay, so a service function chain placement optimization algorithm based on the IQGA-Viterbi learning algorithm is proposed. In training the Hidden Markov Model (HMM) parameters, the traditional Baum-Welch algorithm easily falls into local optima, so a quantum genetic algorithm is introduced to better optimize the model parameters. In each iteration, the improved algorithm maintains the diversity of feasible solutions and expands the scope of the spatial search by replicating the best-fitness population in equal proportion, thus improving the accuracy of the model parameters. In solving the hidden Markov chain, since the hidden sequence cannot be observed directly, the Viterbi algorithm is used to solve for the implicit sequence exactly and thereby find optimal service paths in the directed graph. Experimental results show that network delay and mapping costs are lower compared with existing algorithms, and the request acceptance ratio is raised.
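As background for the decoding step above, a minimal textbook Viterbi implementation is sketched below (generic HMM notation, not the paper's IQGA-trained model): it finds the most likely hidden state sequence for an observation sequence, working in the log domain for numerical stability.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state path for `obs` under an HMM.

    start_p[s]      : initial probability of state s
    trans_p[a][b]   : transition probability a -> b
    emit_p[s][o]    : probability of observing o in state s
    Works in log-probabilities to avoid underflow on long sequences.
    """
    # Initialize with the first observation.
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in states}]
    # Dynamic programming over the remaining observations.
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][ps][0] + math.log(trans_p[ps][s])
                 + math.log(emit_p[s][obs[t]]),
                 V[t - 1][ps][1] + [s])
                for ps in states)
            V[t][s] = (prob, path)
    # Best final state wins.
    return max(V[-1].values())[1]
```

The same backtracking structure is what lets the paper recover an exact optimal service path through the directed graph of candidate placements.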
To solve the problems of low resource utilization, high energy consumption and poor user service quality in existing virtualized Cloud Radio Access Networks (C-RAN), a virtual resource allocation mechanism based on energy consumption and delay is proposed. According to the network and traffic characteristics of the virtualized C-RAN, and considering resource constraints and proportional fairness, an energy consumption and delay optimization model is established. Furthermore, a heuristic algorithm is used to allocate resources to different types of virtual C-RANs and user virtual base stations to achieve globally optimized resource configuration. Simulation results show that the proposed mechanism can save energy by 62.99% and reduce latency by 32.32% while improving network resource utilization.
Firewall policy is defined as access control rules in Software Defined Networking (SDN), and distributing these Access Control List (ACL) rules across the network can improve the quality of service. In order to reduce the number of rules placed in the network, a Heuristic Algorithm of Rules Allocation (HARA) based on rule multiplexing and merging is proposed in this paper. Considering the TCAM storage space of commodity switches and the traffic load on links connected to endpoint switches, a mixed integer linear programming model that minimizes the number of rules placed in the network is established, and the algorithm solves the rule placement problem for multiple routed unicast sessions of different throughputs. Compared with nonRM-CP algorithms, simulations show that HARA can save up to 18% of TCAM space and reduce the bandwidth utilization rate by 13.1% on average.
To reduce the beamforming training cost and network delay, and to make full use of the Beacon and S-CAP sub-periods in existing Terahertz Wireless Personal Access Network (TWPAN) directional MAC protocols, an Adaptive Directional MAC (AD-MAC) protocol for TWPAN is proposed. AD-MAC adaptively uses whole-network cooperative beam training in static scenarios, and makes network nodes respond quickly to beam training frames based on historical information in dynamic scenarios. A reverse listening strategy is used to reduce the collision probability of nodes in the same sector. Control frames and data frames are transmitted simultaneously in the Beacon and S-CAP slots using time-slot reuse. Theoretical analysis verifies the effectiveness of AD-MAC. Simulation results further show that, compared with ENLBT-MAC, AD-MAC reduces the beamforming training cost by about 21.84% and the average network delay by about 22.70% in static scenarios, and reduces the beamforming training cost by about 18.7% and the average network delay by about 13.07% in dynamic scenarios.
A Novel Matrix Mapping (NMM) method is proposed for the synthesis of sparse rectangular arrays with multiple constraints. Firstly, the element coordinate matrices are resized to improve the Degrees Of Freedom (DOF) of the elements, taking into account both the placeable number and the distributable range of elements. Then, a selection matrix is established to determine which elements should be turned off when the coordinate matrices are thinned. By establishing two different mapping functions, the NMM method overcomes the drawbacks of existing methods in terms of flexibility and effectiveness. Finally, comparison experiments are conducted to verify the effectiveness of the proposed method. The numerical validation shows that the proposed method outperforms existing methods in the design of sparse rectangular arrays.
In this paper, two novel Artificial Magnetic Conductor (AMC) structures, based on a circular loop patch and a substrate, are designed to realize a 180° reflection phase difference over a wide frequency band. The reflection phase property of these two AMCs is applied to redirect the scattered fields of a radar target so as to reduce its Radar Cross Section (RCS). The RCS reduction is realized by covering the target with a chessboard surface composed of the two proposed AMC structures, so wideband RCS reduction can be achieved. Compared with a same-sized metallic surface, the proposed chessboard surface reduces the RCS drastically from 8 to 20 GHz under normally incident waves, and the RCS is also reduced under obliquely incident waves. Meanwhile, this surface can also be used as an antenna: by precisely designing the feed network, a metasurface antenna with a low profile is obtained. The simulated impedance matching band is from 9.08 to 10.30 GHz. Excellent agreement is obtained between simulation and measurement for both the metasurface antenna and the chessboard surface. This approach enables the integrated design of antenna and metasurface, achieving RCS reduction while maintaining the radiation properties.
To address the track-to-track association problem of radar and Electronic Support Measurements (ESM) in the presence of sensor biases and of different targets reported by different sensors, an anti-bias track-to-track association algorithm based on track vector detection is proposed according to the statistical characteristics of Gaussian random vectors. Firstly, the state estimation decomposition equation is derived in Modified Polar Coordinates (MPC), and the track vectors are obtained by the real state cancellation method. Secondly, to eliminate most non-homologous target tracks, rough association is performed according to the features of the azimuthal rate and Inverse-Time-to-Go (ITG). Finally, the track-to-track association of radar and ESM is extracted based on the chi-square distribution of the track vectors. The effectiveness of the proposed algorithm is verified by Monte Carlo simulations under various sensor biases, target densities and detection probabilities.
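The final chi-square step is a standard statistical gating test; the sketch below illustrates the generic idea (a Mahalanobis distance compared against a chi-square quantile) for a 2-dimensional track difference vector, and is an assumption-level illustration rather than the paper's exact test statistic.

```python
def chi_square_gate(d, S, threshold):
    """Accept a track pair as associated if the Mahalanobis distance of
    their difference vector d (2-D) with covariance S is below a
    chi-square threshold (e.g. 5.99 for 2 degrees of freedom at 95%).
    """
    # Invert the 2x2 covariance matrix analytically.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[ S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det,  S[0][0] / det]]
    # Mahalanobis distance d^T S^{-1} d.
    m = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return m <= threshold
```

Under the homologous-track hypothesis the distance follows a chi-square distribution, so the threshold directly controls the false-rejection rate.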
Considering the limits of fuzzy comprehensive evaluation of the quality of early warning radar intelligence in actual training, a quality evaluation method for radar intelligence based on asymmetric proximity theory and multilevel fuzzy comprehensive evaluation is proposed. Through analysis of the production, transmission and usage environment of early warning radar intelligence, the evaluation metrics are integrated into six classes, namely timeliness, accuracy, completeness, continuity, objectiveness and so on; the factor set, weight set and comment set are then established, and the quality of the radar intelligence is evaluated by combining asymmetric proximity with fuzzy comprehensive evaluation. The method and results not only enable comprehensive evaluation of radar intelligence quality and help identify the factors that determine it, but also provide a reference for evaluating the operational effectiveness of radar intelligence in complex environments.
The pronounced orbit curvature of Medium Earth Orbit (MEO) SAR results in severe two-dimensional space variance in the received signals, so the focusing of MEO SAR data remains an open problem. A fourth-order polynomial is used to model the range history, and an azimuth two-step resampling method is proposed to address the azimuth variance. The first azimuth resampling, in the time domain, equalizes the azimuth chirp rate so that the CS/RMA algorithm can handle the space variance of the range cell migration. The second azimuth resampling corrects the remaining space variance of the Doppler parameters, including the range-azimuth coupled space variance of the azimuth chirp rate and the higher-order focusing parameters. The proposed method addresses the azimuth space variance of the whole scene, making conventional frequency-domain focusing algorithms applicable to large-scene focusing. Finally, comparison results between the proposed method and a reference method validate its effectiveness.
Deep learning based ship detection methods have strict demands on the quantity and quality of SAR images, and collecting a large volume of images and producing the corresponding labels takes substantial manpower and financial resources. In this paper, based on the existing SAR Ship Detection Dataset (SSDD), the problem of insufficient utilization of the dataset is addressed. The algorithm is based on Generative Adversarial Networks (GAN) and Online Hard Example Mining (OHEM). A spatial transformer network is used to transform the feature map to generate feature maps of ship samples with different sizes and rotation angles, which improves the adaptability of the detector. OHEM is used to discover and make full use of hard examples during backward propagation; the limit on the positive-negative sample ratio in the detection algorithm is removed, improving sample utilization. Experiments on the SSDD dataset show that the two improvements raise the performance of the detection algorithm by 1.3% and 1.0% respectively, and by 2.1% in combination. Both methods are independent of the specific detection algorithm, only increase training time, and add no computation at test time, so they have strong generality and practicability.
Abnormal pixels in hyperspectral images often have low probability of occurrence and are scattered outside the background data cloud. How to detect these abnormal pixels automatically is an important research direction in hyperspectral image processing. Classical hyperspectral anomaly detection methods are usually based on a statistical perspective. The widely used RXD algorithm gives the anomaly distribution directly through the second-order statistics of the image, but it does not take the higher-order statistics of the image into account. Anomaly detection based on Independent Component Analysis (ICA) exploits the sensitivity of higher-order statistics to outliers, but it needs an iterative process to first extract the abnormal components, which are then used for anomaly detection. A method based on the cokurtosis tensor is proposed for anomaly detection. This method does not need to extract anomaly components first: it can directly detect the observed pixels and give the distribution of abnormal pixels. Experimental results on both simulated and real data show that it can detect abnormal pixels while better suppressing the background, thereby reducing the false alarm rate and improving detection accuracy.
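To make the second-order baseline concrete, here is a minimal sketch of the global RXD detector mentioned above, for the toy case of two spectral bands and pure-Python arithmetic: each pixel is scored by its Mahalanobis distance to the background mean under the sample covariance. Band count and data layout are illustrative assumptions.

```python
def rxd_scores(pixels):
    """Global RXD anomaly score for 2-band pixels.

    pixels : list of (band1, band2) tuples.
    Returns one Mahalanobis-distance score per pixel; larger means
    more anomalous relative to the global background statistics.
    """
    n = len(pixels)
    # Background mean and 2x2 sample covariance.
    mu = [sum(p[k] for p in pixels) / n for k in range(2)]
    c = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in pixels) / n
          for j in range(2)] for i in range(2)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[ c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det,  c[0][0] / det]]
    # Score each pixel: (p - mu)^T C^{-1} (p - mu).
    scores = []
    for p in pixels:
        d = [p[0] - mu[0], p[1] - mu[1]]
        scores.append(sum(d[i] * inv[i][j] * d[j]
                          for i in range(2) for j in range(2)))
    return scores
```

The cokurtosis-tensor method of the paper replaces this covariance (second-order) statistic with a fourth-order one, which is exactly the sensitivity to outliers that RXD lacks.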
The correction of tropospheric delay is limited by the shortage of sounding data, which leads to low correction efficiency. This paper proposes a model named Sa+GPT2w, combining the Saastamoinen model with the GPT2w model. Real-time correction of the Zenith Tropospheric Delay (ZTD) over China is realized by using the high-precision meteorological values provided by the GPT2w model, and the results are verified with measured data. Taking the ZTD of the International GNSS Service (IGS) in 2015-2017 as a reference, the accuracy of the Sa+GPT2w model (bias: 1.661 cm, RMS: 4.711 cm) improves by 50.5%, 41.9% and 37.1% relative to the Sa+EGNOS, Sa+UNB3m and Hop+GPT2w models, respectively. Moreover, using the ZTD from the Global Geodetic Observing System (GGOS) in 2017 as a standard, the Sa+GPT2w model (bias: 1.551 cm, RMS: 4.859 cm) improves the accuracy by 49.5%, 38.5% and 46.8% relative to the other three models, respectively. Finally, this paper analyzes the temporal and spatial distribution characteristics of the bias and RMS of the above models. The results provide a significant reference for correcting ZTD with different meteorological models in navigation and atmospheric refraction research over China.
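For orientation, the hydrostatic part of the textbook Saastamoinen model (the "Sa" half of Sa+GPT2w) is simple enough to sketch; in the paper's scheme the surface pressure would be supplied by GPT2w rather than sounding data. This is the standard published formula, not code from the paper, and the wet component and GPT2w itself are not reproduced.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay in metres.

    pressure_hpa : surface pressure (hPa), e.g. from GPT2w
    lat_rad      : site latitude (radians)
    height_m     : site height above the ellipsoid (metres)
    """
    # Standard Saastamoinen hydrostatic formula; the denominator models
    # the latitude and height dependence of gravity.
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m)
```

At standard sea-level pressure this gives roughly 2.3 m of zenith delay, which is the dominant share of the total ZTD the paper corrects.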
At present, microwave radiometers suffer from serious Radio Frequency Interference (RFI), especially at low frequencies. In this paper, an RFI detection algorithm is proposed for an L-band phased array radiometer used to measure sea surface salinity and soil moisture. First, the L-band phased array radiometer is introduced briefly. Secondly, the RFI detection algorithm is described in detail; it consists of the raw RFI flag, the first moving-averaged window RFI flag, the second moving-averaged window RFI flag and the expanded RFI flag. Finally, experimental data obtained by the L-band phased array radiometer are processed with the proposed algorithm. The results indicate that the proposed algorithm can effectively detect RFI-contaminated abnormal data and exhibits good detection ability.
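The flag cascade described above can be sketched generically as threshold flagging, a moving-average window flag, and expansion of flagged regions. The sketch below collapses the two moving-average stages into one for brevity and uses hypothetical threshold parameters; it illustrates the structure, not the paper's calibrated detector.

```python
def rfi_flags(samples, raw_thresh, win, win_thresh, expand):
    """Three-stage RFI flagging sketch.

    1. raw flag:      sample exceeds raw_thresh
    2. window flag:   moving average over `win` samples exceeds win_thresh
    3. expanded flag: flagged regions grown by `expand` samples each side
    Returns a boolean flag per sample (True = RFI-contaminated).
    """
    n = len(samples)
    raw = [s > raw_thresh for s in samples]
    # Moving-averaged window flag: flag every sample in a hot window.
    avg = [False] * n
    for i in range(n - win + 1):
        if sum(samples[i:i + win]) / win > win_thresh:
            for j in range(i, i + win):
                avg[j] = True
    merged = [r or a for r, a in zip(raw, avg)]
    # Expand flagged regions to catch contaminated neighbours.
    final = [False] * n
    for i, f in enumerate(merged):
        if f:
            for j in range(max(0, i - expand), min(n, i + expand + 1)):
                final[j] = True
    return final
```

The window stage catches low-level RFI that individual samples miss, while the expansion stage guards against leakage into adjacent samples.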
Single-beacon localization algorithms based on an additive noise model cannot accurately represent the actual characteristics of distance measurement, leading to model mismatch. A two-step localization algorithm accounting for multiplicative noise is presented, which combines a least squares algorithm with a nonlinear fading filter. A range error model under multiplicative noise is established based on analysis of the effective sound velocity error. The nonlinear fading filter with a single fading factor under multiplicative noise is improved by introducing an attenuation factor, which increases track continuity. A least-squares-based pre-location process is used to address the improved algorithm's sensitivity to the initial value. Simulation and experimental data show that the localization precision of the proposed algorithm is clearly better than that of the extended Kalman filter under an additive noise background.
Recommendation systems help people make decisions conveniently. However, few studies consider removing irrelevant noise users and retaining a small number of core users to make recommendations. A new core user extraction method is proposed based on trust relationships and interest similarity. First, the trust and interest similarity between all user pairs are calculated and sorted; then candidate core user sets are selected by two strategies based on the frequency and position weight with which users appear in nearest-neighbor lists. Finally, core users are screened out according to their recommendation ability. Experimental results show the effectiveness of core user recommendation: a core of 20% of the users can reach a recommendation accuracy of more than 90%, and recommending through core users can resist the negative effects of attacks on the recommendation system.
For high-precision frequency measurement of dynamic signals with a high fundamental frequency and small frequency changes in electronic measurement, a differential frequency measurement method is introduced, and a novel dynamically adjustable multi-stage frequency-difference circuit structure is proposed. The fast differential frequency measurement system is based on FPGA, on which a Fast Fourier Transform (FFT) algorithm is designed to realize the data processing function of the system. Simulation and experimental results show that the multi-stage frequency-difference circuit structure achieves high-precision frequency measurement, with the result obtained through spectrum analysis, and that the system realizes fast FFT operation. Compared with the MATLAB software platform, the system has obvious advantages in data processing efficiency. The FFT model structure can be dynamically adjusted to meet the requirements of FFT operations of different sizes, and the system performance indices meet the requirements of the data acquisition system.
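The FFT at the heart of the system above is the standard radix-2 Cooley-Tukey decomposition; a minimal reference implementation is sketched below (in Python for clarity, whereas the paper's version is a fixed-point FPGA design).

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT.

    x : list of (complex) samples; length must be a power of two.
    Returns the DFT of x with the usual e^{-j2*pi*kn/N} convention.
    """
    n = len(x)
    if n == 1:
        return x[:]
    # Split into even- and odd-indexed halves and transform each.
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Apply twiddle factors to the odd half.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    # Butterfly combine.
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])
```

The recursive butterfly structure is also what makes the hardware pipeline adjustable to different transform sizes: each stage is the same butterfly repeated at a different stride.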
For the problem of qualitative and quantitative evaluation of electromagnetic environment complexity, this paper proposes a novel evaluation algorithm based on the fast S-transform and a time-frequency space model, which can quantify time complexity, frequency complexity and energy complexity simultaneously. Meanwhile, the concept of qualitative and quantitative evaluation degree and its computation methods are introduced. To overcome the limitations of traditional methods, the F-norm and root-mean-square are selected as two important evaluation indicators, which have an advantage in accurate evaluation. Simulation results show that the proposed method accurately and effectively reflects the intensity of electromagnetic interference; meanwhile, an interference experiment on a bus card confirms the correctness of the time-frequency space model, and the experimental test results verify the correctness of the proposed evaluators.
The Firefly Algorithm (FA) may suffer from low convergence accuracy, depending on the complexity of the optimization problem. To overcome this drawback, a novel learning strategy named Orthogonal Opposition Based Learning (OOBL) is proposed and integrated into FA. In OOBL, the opposite is first calculated by centroid opposition, making full use of the population's search experience and avoiding dependence on the coordinate system. Second, orthogonal opposite candidate solutions are constructed by orthogonal experimental design, combining useful information from each individual and its opposite. The proposed algorithm is tested on a standard benchmark suite and compared with some recently introduced FA variants. The experimental results verify the effectiveness of OOBL and show the outstanding convergence accuracy of the proposed algorithm on most of the test functions.
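The centroid-opposition step can be sketched in a few lines: each individual is reflected through the population centroid, so the "opposite" depends on the population's own distribution rather than on fixed coordinate bounds. This is a generic illustration of centroid opposition, not the paper's full OOBL (the orthogonal-design construction is omitted).

```python
def centroid_opposition(population):
    """Reflect each individual through the population centroid.

    population : list of equal-length real vectors (lists).
    Returns the list of opposites x_opp = 2 * centroid - x.
    """
    dim = len(population[0])
    n = len(population)
    # Centroid of the current population, dimension by dimension.
    centroid = [sum(ind[d] for ind in population) / n for d in range(dim)]
    # Point reflection of every individual through the centroid.
    return [[2.0 * centroid[d] - ind[d] for d in range(dim)]
            for ind in population]
```

Because the reflection point moves with the population, the opposites stay informative as the search converges, which is the property the paper exploits before building the orthogonal candidate solutions.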
Heavy computational burden, complex training procedures, and the poor universality caused by manually set fixed thresholds are the main issues with most noisy-image quality evaluation algorithms based on domain transformation or machine learning. As an attempted solution, an improved spatial-domain noisy image quality evaluation algorithm based on the masking effect is presented. Firstly, according to the layer-by-layer progressive rule based on the Hosaka principle, an image is divided into sub-blocks of different sizes that match the frequency distribution of its content, and a masking weight is assigned to each sub-block. Then the noise in the image is detected through pixel gradient information extraction, via a two-step strategy. The preliminary evaluation value is obtained by using the masking weights to weight the noise pollution indices of all sub-blocks. Finally, correction and normalization are carried out to generate the whole-image quality evaluation parameter, the Modified No-Reference Peak Signal to Noise Ratio (MNRPSNR). The algorithm is tested on the LIVE and TID2008 image quality assessment databases, covering a variety of noise types. The results indicate that it is strongly competitive with current mainstream evaluation algorithms and significantly improves on the traditional algorithm, demonstrating high consistency with human subjective perception and applicability to multiple noise types.
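As a baseline for the MNRPSNR parameter above, the classical full-reference PSNR is sketched below; the paper's no-reference variant replaces the reference image with block-wise noise estimates weighted by the masking weights, which is not reproduced here.

```python
import math

def psnr(reference, distorted, peak=255.0):
    """Classical full-reference PSNR in dB.

    reference, distorted : equal-length sequences of pixel values
    peak                 : maximum possible pixel value (255 for 8-bit)
    """
    mse = sum((r - d) ** 2
              for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

A no-reference scheme such as MNRPSNR must estimate the "mse" term from the distorted image alone, which is exactly why the sub-block masking weights and gradient-based noise detection are needed.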
When evaluating the enhancement quality of a whole image set, the existing average score criterion varies inconsistently across different image sets and produces large fluctuations in evaluation quality. Therefore, this paper proposes a consistency enhancement quality assessment criterion based on confidence intervals for arbitrary image sets. By setting application parameters and using confidence intervals to screen the data, the proposed criterion compares the quality score difference before and after enhancing each image, evaluates the consistency of image quality enhancement, and then calculates the effective value of the consistency enhancement quality scores. Among many image enhancement algorithms, the proposed criterion can select a highly reliable enhancement algorithm for a specific application. The experimental results show that the proposed criterion has good subjective and objective consistency and outperforms the existing average score criterion, providing an evaluation criterion for image enhancement algorithms applied to arbitrary image sets.
An improved method is proposed for compensating the distortion caused by mismatches in Time-Interleaved Analog-to-Digital Converters (TIADC). Offset and gain errors are compensated using estimated error parameters, and sampling-time error is compensated by a simplified Lagrange interpolation algorithm. The compensation method is implemented in FPGA with a low-complexity fixed-point algorithm, and online calibration of multi-channel ADC sampling data is realized on the TIADC hardware platform. Experimental results show that the proposed method improves the Spurious-Free Dynamic Range (SFDR) of the sampled data by up to 51 dB in simulation and by up to 45 dB in the hardware implementation. While maintaining the error estimation precision and compensation effect, the method not only reduces the computational complexity of the algorithm, but also makes the compensation structure independent of the number of TIADC channels.
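The sampling-time correction above amounts to a fractional-delay resampling; a minimal 3-point Lagrange interpolator is sketched below as an illustration of the idea (the paper's simplified fixed-point version may differ). Lagrange interpolation of order N is exact for polynomial signals up to degree N.

```python
def lagrange_frac_delay(x, n, d):
    """3-point Lagrange interpolation of the sample stream x at the
    fractional position n + d, with -1 <= d <= 1.

    Used conceptually to shift a TIADC channel's samples onto the
    ideal uniform sampling grid, correcting timing skew.
    """
    xm1, x0, xp1 = x[n - 1], x[n], x[n + 1]
    # Lagrange basis polynomials evaluated at d for nodes -1, 0, +1.
    return ((d * (d - 1) / 2) * xm1
            + (1 - d * d) * x0
            + (d * (d + 1) / 2) * xp1)
```

In hardware these three basis weights become a short FIR filter per channel whose coefficients depend only on the estimated skew d, which keeps the structure independent of the channel count.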
A deep learning model based on a residual network and the spectrogram is used to recognize infant crying. The corpus has a balanced proportion of infant crying and non-crying samples. Through 5-fold cross validation, compared with three models, Support Vector Machine (SVM), Convolutional Neural Network (CNN) and a cochleagram-based residual network using Gammatone filters (GT-Resnet), the spectrogram-based residual network achieves the best F1-score of 0.9965 and satisfies real-time requirements. This shows that the spectrogram reflects acoustic features intuitively and comprehensively in infant crying recognition, and that the spectrogram-based residual network is a good solution to the infant crying recognition problem.
The blind separation performance bound of Paired Carrier Multiple Access (PCMA) mixed signals is a measure of the separability of the mixed signals and of the performance of separation algorithms. For the PCMA mixed signal, the spatial mapping between the modulated signal bits and symbols is constructed from the transmit signal model, and the maximum likelihood criterion is used to derive a lower-bound expression for separation performance that is independent of the separation algorithm. Numerical results agree well with Viterbi simulation results under ideal conditions, which verifies the rationality of the derived performance bounds.