2013 Vol. 35, No. 7
The 3D wavenumber-domain imaging algorithm makes no approximation of the range history and therefore offers high image-reconstruction precision. In downward-looking array 3D SAR, the cross-track array length is much shorter than the cross-track imaging swath. To avoid wrap-around during the FFT operation, the echo signal should be zero-padded to the size of the imaging scene, but excessive zero-padding causes high memory requirements and heavy computation loads. Far less zero-padding is needed if only a Region Of Interest (ROI) is processed instead of the whole imaging scene, which speeds up the algorithm. In this paper, a method is introduced that first performs 2D imaging in the wave-propagation/along-track and wave-propagation/cross-track dimensions, then determines the ROI from these two imaging results. Finally, the ROI is processed with the 3D wavenumber-domain imaging algorithm. The effectiveness of the proposed method is verified with experimental data.
This paper focuses on multi-band fusion imaging. A method based on high-precision parameter estimation of the Geometrical Theory of Diffraction (GTD) model is given. It uses the differences in the poles and scattering coefficients between the all-pole models of the sub-bands to estimate the incoherent components. The gapped-data amplitude and phase estimation algorithm is adopted to fill the gap between the bands. Finally, fused data are obtained by high-precision parameter estimation of the GTD all-pole model with the full-band data. Simulations indicate that the resolution of the 1D range profile and the 2D ISAR image based on this method is better than that of either sub-band, which verifies the effectiveness of the method.
When imaging a maneuvering target with Inverse Synthetic Aperture Ladar (ISAL), dispersion and time-varying Doppler frequency exist in the echo signal in the range and cross-range directions, respectively. For targets whose maneuverability can be approximated as second-order motion, the characteristics of the ISAL echo signal are analyzed, and an imaging algorithm based on the FRactional Fourier Transform (FRFT) is proposed. FRFT is used to eliminate range dispersion. After motion compensation, a method combining FRFT and the CLEAN technique (FRFT-CLEAN) is proposed for azimuth imaging. Simulation results demonstrate the validity of this algorithm.
To improve the performance of radar High-Resolution Range Profile (HRRP) target recognition, a new Truncated Stick-Breaking Hidden Markov Model (TSB-HMM) based on time-domain features is proposed. Moreover, a hierarchical classification scheme based on TSB-HMM is employed, which utilizes both the time-domain feature and the power-spectral-density feature of HRRPs for hierarchical recognition. Experimental results based on measured data show that the TSB-HMM is an effective method for radar HRRP recognition, and that the hierarchical classification scheme largely enhances the average recognition rate. Furthermore, the proposed method obtains satisfactory recognition performance even with very limited training data.
The texture component of the compound-Gaussian model determines the non-Gaussian characteristics of clutter, and uncertainty in the texture component can result in detection-performance degradation of conventional detectors. In this paper, within the Bayesian framework, a prior distribution is used to represent the uncertainty of the texture component, and the impact of the prior model on robust detection performance is discussed. Two kinds of prior models are considered: non-informative and informative. The non-informative priors include the Jeffreys prior and the generalized non-informative prior, for which the Normalized Matched Filter (NMF) is derived. A conjugate prior distribution is used as the informative prior, yielding the Knowledge-Aided NMF (KA-NMF), whose structure and threshold are functions of the prior-model parameters. The sensitivity of the KA-NMF detection performance to these parameters is analyzed. Furthermore, a non-informative prior is placed on the parameters themselves, giving the Hierarchical Bayesian NMF (HB-NMF). Computer simulations and real sea-clutter data show that the detection performance of the HB-NMF does not depend on the prior-model parameters, and that its robustness and detection performance outperform the KA-NMF and the NMF, respectively.
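As a point of reference for the detectors discussed above, the sketch below computes the standard NMF test statistic in Python (the KA-NMF and HB-NMF variants, whose thresholds depend on the prior-model parameters, are not reproduced; the steering vector, covariance, and data are illustrative assumptions only):

```python
import numpy as np

def nmf_statistic(z, p, M):
    """Standard Normalized Matched Filter statistic for one range cell.
    z: complex data vector, p: known steering vector, M: speckle covariance."""
    Mi = np.linalg.inv(M)
    num = np.abs(p.conj() @ Mi @ z) ** 2
    den = np.real(p.conj() @ Mi @ p) * np.real(z.conj() @ Mi @ z)
    return num / den   # lies in [0, 1]; compared against a threshold

# toy example with white speckle covariance (illustrative values only)
rng = np.random.default_rng(0)
N = 8
p = np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))      # assumed steering vector
M = np.eye(N)
clutter = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
print(nmf_statistic(clutter, p, M))            # small when only clutter
print(nmf_statistic(clutter + 3 * p, p, M))    # close to 1 when a target is present
```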
Considering the application of long-time accumulation to the detection and parameter estimation of Linear Frequency Modulated Continuous Wave (LFMCW) signals, a novel method that combines a window-variant coherent averaging method with a cyclostationarity-based method is proposed. First, a Signal-to-Noise Ratio (SNR) gain is obtained with the window-variant coherent averaging method based on segment-wise coherent integration. Then, the cyclostationarity-based method is used to complete the detection and parameter estimation of the LFMCW signal. The algorithm realizes long-time accumulation with low computational complexity and solves the problem caused by the initial time offset. Under low-SNR conditions, the method still achieves good estimation performance. Simulations verify the effectiveness of the method.
This paper describes an ultrafine MultiFunctional Unmanned Aerial Vehicle (UAV) SAR (MFUSAR) developed at the Institute of Electronics, Chinese Academy of Sciences. MFUSAR operates in the Ku and X bands and is capable of high resolution, interferometry, and full polarimetry. The function and architecture of MFUSAR are described in detail. Effective motion compensation, wideband signal generation, IF receiving, and multi-channel balancing schemes are proposed for MFUSAR. The high-quality images formed in the UAV flight tests clearly demonstrate the feasibility of a UAV-borne multifunctional SAR system.
This paper introduces the mechanism of sea-surface wind measurement using Global Navigation Satellite System Reflection (GNSS-R) signals and the theoretical model of the sea-surface scattered signal. The delay waveform of the scattered signal is then analyzed through numerical integration, and the influence of wind speed, wind direction, GNSS satellite elevation angle, receiver height, and receiver speed on this delay waveform is discussed in detail. Based on this analysis, an algorithm for GNSS-R sea-surface wind retrieval is proposed. To validate the GNSS-R wind-measurement mechanism and the accuracy of the algorithm, data from Hurricane Dennis are used for testing. The results show that the GNSS-R wind speed is generally consistent with the observational data at low-to-moderate wind speeds (below 20 m/s) and underestimates the observational data at high wind speeds (above 20 m/s). Finally, a correction model for the GNSS-R wind speed is preliminarily proposed on the basis of the experimental results.
Because the GPS signal power at the receiver is weak, the receiver is easily disturbed in low carrier-to-noise power density ratio (C/N0) environments. A design method for a vector tracking loop is proposed in which jamming signals are suppressed within the tracking loop. The actual pseudorange and pseudorange-rate errors are taken as the states, and the measured pseudorange and pseudorange-rate errors are used as the measurements of the filter. After the receiver position is updated, the line-of-sight vectors between the receiver and the visible satellites are calculated, closing the tracking feedback loop. The anti-jamming performance of the vector tracking loop is analyzed by comparing its predicted pseudorange variance with that of a scalar loop. Real satellite ephemeris data are collected for the simulation experiments; the results show that the predicted pseudorange variance of the vector tracking loop is lower and its anti-jamming performance is better than those of the scalar loop.
An Extended Kalman Particle Filter (EKPF) integrating negative information (scans or dwells with no measurements) is implemented for target tracking in the presence of a Stand-Off Jammer (SOJ). In the EKPF, a Gaussian-sum likelihood function derived from a sensor model accounting for both positive and negative information is used directly in the weight update of the particle filter. The importance density is generated by an Extended Kalman Filter (EKF) so as to take full account of the current measurement, driving the particle distribution toward the posterior probability density function; as a result, good tracking accuracy is achieved with a small number of particles. Simulation results show that the EKPF outperforms the EKF implementation in terms of track continuity and accuracy, at the cost of higher computational complexity.
An adaptive Kalman filtering algorithm based on variational Bayesian learning is proposed to deal with colored and time-varying measurement noise. Through measurement differencing, the model is converted back to a standard model in which the measurement noise is white but correlated with the process noise. The Kalman filter is modified to account for this correlation, and variational Bayesian learning is incorporated to jointly estimate the measurement-noise statistics and the state in a recursive manner. Simulation results demonstrate that this adaptive algorithm can track time-varying noise and provides more accurate state estimation than the standard Kalman filter under colored and time-varying noise.
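As a hedged illustration of the differencing step mentioned above (a minimal sketch assuming a first-order autoregressive model $v_k = \Psi v_{k-1} + \zeta_{k-1}$ for the colored measurement noise, with $\zeta$ white; the paper's exact noise model may differ): with state model $x_k = F x_{k-1} + w_{k-1}$ and measurement $z_k = H x_k + v_k$, the pseudo-measurement
$$ z^{*}_{k-1} \triangleq z_k - \Psi z_{k-1} = (HF - \Psi H)\,x_{k-1} + \underbrace{H w_{k-1} + \zeta_{k-1}}_{v^{*}_{k-1}}, \qquad \mathrm{E}\!\left[w_{k-1}\,(v^{*}_{k-1})^{\mathsf T}\right] = Q\,H^{\mathsf T}, $$
has white noise $v^{*}$ that is nonetheless correlated with the process noise $w$, which is exactly why the Kalman gain must be modified as stated above.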
A steady-state radio fingerprint feature extraction algorithm based on higher-order spectra and time-domain analysis is presented for the problem of extracting radio fingerprint features under steady-state operation. First, a mathematical model of steady-state operation is constructed and the drawbacks of existing bispectrum-based feature extraction algorithms are analyzed. Second, by combining the periodicity of the rectangular integral bispectrum with time-domain analysis, a modified steady-state feature extraction algorithm is proposed, and its applicability to any kind of higher-order spectral feature extraction is proved. Finally, computational experiments with measured data verify the efficiency and reliability of the proposed algorithm. Compared with the traditional rectangular integral bispectrum algorithm, the proposed algorithm improves the accuracy rate from 90% to 97%; at the same accuracy rate, it achieves higher efficiency than the traditional algorithm.
The Finite Rate of Innovation (FRI) framework is an effective theory for sampling and reconstructing non-bandlimited signals. However, the method may fail when the pulse shape is complicated, since it requires the frequency spectrum of the pulse to meet special conditions. In this paper, an FRI sampling system based on exponential reproducing kernels is introduced. Its outstanding advantage is that the adaptability to the pulse shape can be increased by adjusting the parameters of the sampling kernel. On the other hand, from the implementation and stability points of view, the parameters are constrained by additional factors, which are analyzed in detail, and guidance on choosing appropriate parameters is given. Simulations are conducted on hybrid LFM and phase-coded pulse streams, and the results verify the validity of the proposed method.
Optimization-based colorization methods rely on a specific local relation model and cannot describe global characteristics such as the direct connection between the visual result and the user-provided scribbles. Hence, a general framework is put forward, and it is proved that two seemingly different methods are special cases of it and are therefore equivalent. It is then derived that the colorization result can be synthesized as a weighted sum of the chrominance of all scribbles, where the coefficients are nonnegative and normalized; this leads to the conclusion that optimization-based colorization is exactly a process of chrominance blending. Moreover, the equivalence between each chrominance blending weight and the pixel's first-reach probability to the corresponding scribble is established. This mechanism inspires improvements to existing algorithms. Experimental results indicate that introducing the blending mechanism benefits the visual result, not only in robustness to the amount and position of user scribbles but also in the convenience of selecting desirable colors in a timely manner.
Neural population encoding and spike-train analysis play an important role in neural information processing. In this study, a spike-train classification method based on a high-order multiple Poisson model is proposed, and mathematical derivations are given for the spike intensity distribution, the matching accuracy, and the integration strategy, respectively. Finally, 20 trials are used as the training set in a mouse U-maze experiment. The results show that the accuracy of the classification method is about 97%.
Jousselme proposed the distance of evidence to measure the degree of difference between the evidence representing an identification result and the evidence representing the true solution. However, this distance becomes smaller as the dispersion of the evidences' basic probability assignments increases, and for two totally conflicting pieces of evidence the distance is less than one unless both are categorical. Therefore, the Jousselme distance is not always suitable for measuring the difference between two pieces of evidence. To address this problem, a modified distance of evidence is put forward. Numerical examples indicate that the modified distance extends the applicability of the Jousselme distance for measuring the difference between pieces of evidence.
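For reference, a minimal Python sketch of the standard Jousselme distance is given below (the masses are illustrative and the authors' modified distance is not reproduced). The second example shows the behaviour criticized above: two totally conflicting but non-categorical pieces of evidence get a distance below one.

```python
import numpy as np

def jousselme_distance(m1, m2):
    """Jousselme distance between two BPAs given as dicts {frozenset: mass}."""
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    v1 = np.array([m1.get(A, 0.0) for A in focal])
    v2 = np.array([m2.get(A, 0.0) for A in focal])
    # Jaccard-index matrix D(A, B) = |A n B| / |A u B|
    D = np.array([[len(A & B) / len(A | B) for B in focal] for A in focal])
    diff = v1 - v2
    return np.sqrt(0.5 * diff @ D @ diff)

# two completely conflicting categorical BPAs -> distance 1
m1 = {frozenset({'a'}): 1.0}
m2 = {frozenset({'b'}): 1.0}
print(jousselme_distance(m1, m2))   # 1.0

# totally conflicting but non-categorical BPAs -> distance about 0.707,
# the behaviour the modified distance is meant to correct
m3 = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
m4 = {frozenset({'c'}): 0.5, frozenset({'d'}): 0.5}
print(jousselme_distance(m3, m4))
```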
An improved training algorithm is proposed to resolve the conflict between the objective functions of the weights and the quantum intervals during the training of the Multilevel Activation Function Quantum Neural Network (MAF-QNN), a conflict that degrades training speed and network performance. Under the minimum mean-square-error criterion, the objective functions of the weights and the quantum intervals are unified and trained simultaneously. Then, by introducing the Levenberg-Marquardt (LM) algorithm, the probability that training falls into a local minimum is reduced, so the MAF-QNN can be trained effectively and efficiently. Simulation results show that the proposed algorithm markedly decreases the number of iterations and significantly improves convergence precision. It can therefore be applied to data classification, function approximation, and so on, expanding the application fields of the MAF-QNN.
To overcome the drawback of traditional visual saliency detection methods that use only the information of the currently viewed image or only prior knowledge, this paper proposes an information-theoretic algorithm that combines long-term features, which encode prior knowledge, with short-term features, which encode the information of the currently viewed image. First, a long-term sparse dictionary and a short-term sparse dictionary are trained using eye-tracking data and the current image, respectively, and their corresponding sparse codes are taken as the long-term and short-term features. Second, to alleviate the problem of existing methods that estimate feature statistics over the entire image or over a local neighborhood of fixed size, an information-entropy-based method for estimating the probability distribution of features is proposed; it adaptively infers an optimal region size from the characteristics of the current image for calculating the probability of the appearance of the long-term and short-term features. Finally, the saliency map is formed from the Shannon self-information. Subjective and quantitative comparisons with 8 state-of-the-art methods on publicly available eye-tracking databases demonstrate the effectiveness of the proposed method.
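A minimal sketch of the final self-information step, assuming a feature map already quantized to discrete codes and a fixed window size (the paper's sparse long/short-term features and adaptive region-size estimation are not reproduced; all values are illustrative):

```python
import numpy as np

def self_information_saliency(codes, n_codes, win):
    """Saliency = -log p(code), with p estimated in a (2*win+1)^2 neighborhood."""
    H, W = codes.shape
    sal = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            r0, r1 = max(0, i - win), min(H, i + win + 1)
            c0, c1 = max(0, j - win), min(W, j + win + 1)
            patch = codes[r0:r1, c0:c1]
            # Laplace-smoothed probability of the center code in the window
            p = (np.count_nonzero(patch == codes[i, j]) + 1) / (patch.size + n_codes)
            sal[i, j] = -np.log(p)
    return sal

# toy example: a 2-code image with a small rare blob
codes = np.zeros((64, 64), dtype=int)
codes[30:34, 30:34] = 1
sal = self_information_saliency(codes, n_codes=2, win=8)
print(sal[32, 32] > sal[5, 5])   # the rare blob is more salient: True
```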
For spatiogram-based object tracking, a suitable similarity measure is critical. In this paper, a new spatiogram similarity measure is presented. The spatial distribution of the pixels belonging to each bin is modeled as a Gaussian distribution, whose mean vector and covariance matrix are computed from all pixels in that bin. The similarity of two spatial distributions is then computed with the Jensen-Shannon Divergence (JSD), and the similarity of the color feature is calculated using histogram intersection, which is more discriminative than the Bhattacharyya coefficient. Both theoretically and experimentally, the proposed measure is shown to be stable, to offer greater discriminative power than existing measures, and to achieve promising performance in tracking objects in single images and image sequences.
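For illustration, the two generic ingredients named above are sketched below on discrete distributions; the paper applies the JSD to per-bin Gaussian spatial distributions, which is not reproduced here, and the histogram values are illustrative assumptions:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two (normalized) color histograms."""
    return np.minimum(h1, h2).sum()

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two discrete distributions (0 = identical, log 2 = disjoint)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

h1 = np.array([0.1, 0.4, 0.5])
h2 = np.array([0.2, 0.5, 0.3])
print(histogram_intersection(h1, h2))       # 0.8
print(jensen_shannon_divergence(h1, h2))    # small positive value
```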
In complex scenes, the traditional Gaussian mixture model can detect moving objects reasonably well, but as time goes on its parameters converge slowly and it has difficulty adapting to real-time changes of the true background, which increases the false detection rate of moving objects. Exploiting the short-term memory property of the sliding-window technique, this paper proposes a novel sliding-window-based Gaussian mixture model for moving object detection. The method remedies the inability of the traditional Gaussian mixture background model to form a new background in time, improves the completeness of motion detection, and further reduces the algorithm's sensitivity to illumination changes in the scene. Comparative experiments in multiple scenes show that the method detects moving objects more accurately and completely and has better environmental adaptability.
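The following sketch is not the authors' sliding-window mixture model; it only illustrates, with OpenCV's standard MOG2 background subtractor, how a short model history (here a hypothetical 150 frames on a placeholder video file) plays a similar short-memory role:

```python
import cv2

# "scene.avi" is a placeholder path; MOG2's history parameter limits how far
# back the background model remembers, analogous to a sliding window
cap = cv2.VideoCapture("scene.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=150, varThreshold=16,
                                        detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg.apply(frame)             # 0 = background, 255 = foreground, 127 = shadow
    fg_mask = cv2.medianBlur(fg_mask, 5)  # suppress isolated noise pixels
    cv2.imshow("moving objects", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:      # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```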
In image fusion methods based on multiscale transforms, the lowpass subbands are usually fused with a weighted-sum rule. This may drive the intensity of the fused image toward a balanced level, so some bright or dark objects are not preserved, especially when the two source images come from different types of sensors. This paper proposes a total-variation method for optimizing the lowpass weight function, which can be built in either the image domain or the transform domain. Several multiscale transforms are chosen to verify the proposed method. Experimental results show that the method obtains a delicate and precise optimized weight function and preserves the intensity of salient objects well, thus improving the visual quality of the fused image. Moreover, the method can be applied to any multiscale-transform-based image fusion.
Mass detection in mammograms plays an important role in early breast-cancer diagnosis. A novel method for mass detection in mammograms is proposed. Combining the Pulse Coupled Neural Network (PCNN) model with the marker-controlled watershed method, an image slicing method based on Marker-PCNN is presented. The suspicious regions are then extracted through Multiple Concentric Layers (MCL) analysis. Finally, the morphological features of masses are employed to eliminate false-positive areas. Experimental results show that the detection method performs well with a low False Positive (FP) rate, and the correct detection rate reaches 92.08%. Compared with the original MCL method and the Morphological Component Analysis (MCA) method, the proposed method has an evident advantage, especially for dense breasts.
For the recognition of (n,1,m) convolutional codes in non-cooperative signal processing, an algorithm based on maximum-likelihood detection is proposed. First, linear equations in the check-polynomial coefficients are constructed for different lengths, and the equations are solved on the basis of maximum-likelihood detection. Then, equations in the generator-polynomial coefficients are constructed according to the relationship between the check polynomial and the generator polynomials, from which the generator polynomials are deduced. The validity of the algorithm is verified by simulation; case studies illustrate that the method can recognize (n,1,m) convolutional codes in a highly noisy environment.
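As a noise-free illustration of the equation-construction step for a rate-1/2 code (the maximum-likelihood solving used for noisy data, and general (n,1,m) codes, are not reproduced; the generators and message below are hypothetical):

```python
import numpy as np

def gf2_null_space(A):
    """Return a basis (list of 0/1 vectors) of the null space of A over GF(2)."""
    A = A.copy() % 2
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivot_cols.append(c)
        r += 1
    basis = []
    for fc in [c for c in range(cols) if c not in pivot_cols]:
        v = np.zeros(cols, dtype=np.uint8)
        v[fc] = 1
        for i, pc in enumerate(pivot_cols):
            v[pc] = A[i, fc]
        basis.append(v)
    return basis

# hypothetical (2,1,3) code: g1 = 1 + D^2 + D^3, g2 = 1 + D + D^2 + D^3
g1 = np.array([1, 0, 1, 1], dtype=np.uint8)
g2 = np.array([1, 1, 1, 1], dtype=np.uint8)
m = 3
u = np.random.default_rng(1).integers(0, 2, 200).astype(np.uint8)
c1 = np.convolve(u, g1) % 2     # first output stream
c2 = np.convolve(u, g2) % 2     # second output stream

# each row encodes one check relation h1(D) c1(D) + h2(D) c2(D) = 0,
# with unknowns h = (h1[0..m], h2[0..m])
rows = [np.concatenate([c1[t - m:t + 1][::-1], c2[t - m:t + 1][::-1]])
        for t in range(m, len(c1))]
A = np.array(rows, dtype=np.uint8)

h = gf2_null_space(A)[0]
print("h1 =", h[:m + 1], " h2 =", h[m + 1:])   # expect h1 = g2, h2 = g1
```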
Based on existing decoding algorithms, a high-efficiency decoding algorithm for non-binary low-density parity-check codes is proposed. The algorithm combines the layered decoding algorithm with the Min-max algorithm, which not only achieves low complexity and low memory requirements but also doubles the decoding speed. Simulations of a (620,509) code over GF(2^5) show that, at the same bit error rate, the maximum number of iterations required by the proposed algorithm is only 45% of that of Zhang's algorithm presented in 2011.
An algorithm that reduces the complexity of list decoding by using side information is presented. The reasons for the complexity reduction are analyzed in depth, and side information based on shift-register sequences is provided. Compared with methods that transmit side information over a reliable channel, the proposed approach needs no additional channel, does not affect decoding performance or complexity, and is easier to realize in engineering. Finally, the traditional ordered-statistics decoding algorithm is improved by using a small number of the most reliable bits as side information, so that the decoding list is significantly reduced.
To address the discrete components that affect the power-spectrum sideband level, the spectral properties of the M-ary Position Phase Shift Keying (MPPSK) modulated signal are derived. Waveform optimization and full M-ary position modulation are proposed to suppress these discrete spectral components, and two demodulators adapted to the optimized MPPSK signals are designed and simulated. Theoretical analysis and experimental results show that the discrete components are greatly suppressed by the two methods and the spectrum utilization is enhanced: the -40 dB bandwidth is reduced by 73.72% and 43.72%, respectively, and at the same bit error rate the waveform-optimized demodulation saves 0.3~0.7 dB of signal-to-noise ratio compared with full M-ary position demodulation, demonstrating the feasibility and validity of the two methods.
Selecting appropriate spectrum-sensing time and data-transmission time to maximize system transmission efficiency is a core research issue in Cognitive Radio (CR) with broad application prospects. In this paper, a cyclic transmission optimization mechanism is proposed, in which the spectrum detection time, channel searching time, and data transmission time are derived respectively. Numerical and simulation results show that the obtained optimal spectrum detection time, optimal channel search time, and optimal data transmission time can protect the primary user's rights and effectively decrease the data transmission delay.
Cognitive relay networks can mitigate channel fading and enhance spectrum utilization effectively; however, relay selection is a problem of great importance to be solved. Based on the first-order partial derivatives of the Signal-to-Noise Ratio (SNR) at the destination node, this paper defines a cooperative efficiency and designs an iterative relay selection scheme. Considering the decentralized structure of cognitive relay networks, a virtual timer is introduced at the relay nodes and a distributed algorithm based on the iterative relay selection scheme is proposed, in which distributed multiple-relay selection is implemented through the virtual timers at individual relays and the exchange of pertinent information. Asymptotic complexity analysis and numerical simulation results show that the proposed scheme achieves near-optimal performance with low computational effort and communication overhead.
The problem of reactive relay selection for two-way decode-and-forward relaying is investigated. First, the achievable rate region of conventional three-node scenario is characterized. Then, a closed-form expression of the outage probability is derived. Next, by using the channel state information and with the aid of traffic knowledge, a reactive relay selection criterion is proposed. Further, optimal power allocation between superposed signals at the relay is investigated. As a result, two novel policies selecting the optimal scaling coefficients are presented and analyzed in terms of outage probability. Simulation results demonstrate that the system outage performance is improved significantly, especially when the channels are asymmetric.
The Doppler information of line spectra can be used to estimate the parameters of radiated noise sources on an underwater target. Because of the low-frequency and low-speed characteristics, the Doppler frequency shift changes only weakly, so its time-frequency analysis should be confined to a selected narrow frequency band with high resolution. The Wigner-Ville Distribution (WVD) has excellent time-frequency focusing properties, but its computation and storage requirements increase sharply as the required resolution increases. A fast calculation method for selected-band zoom WVD analysis is therefore proposed. The method improves the Chirp-Z Transform (CZT) algorithm and combines it with the WVD, greatly improving the computational performance of high-resolution zoom WVD analysis. Computer simulations and sea-trial results demonstrate the validity of the presented method.
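A minimal sketch of the zoom idea, evaluating the spectrum only inside a selected narrow band by direct computation (the paper's CZT-based fast implementation and its combination with the WVD are not reproduced; the signal parameters are illustrative):

```python
import numpy as np

def zoom_dft(x, fs, f_lo, f_hi, n_bins):
    """Evaluate the DTFT of x on n_bins frequencies in [f_lo, f_hi] only."""
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    # direct evaluation: X(f_k) = sum_n x[n] exp(-j 2 pi f_k n / fs)
    E = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, E @ x

# toy line-spectrum signal: a 100 Hz tone with a tiny Doppler-like drift
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.cos(2 * np.pi * (100.0 + 0.2 * t) * t)

freqs, X = zoom_dft(x, fs, 99.0, 101.5, 512)
print(freqs[np.argmax(np.abs(X))])   # peak near the slowly drifting 100 Hz line
```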
To detect acoustic transient signals at low SNR, a double noise-reduction method combining sub-wave detection and the partial instantaneous energy density level is proposed. A certain order of Intrinsic Mode Function (IMF) containing the useful signal is selected adaptively; the Hilbert energy spectrum is then calculated and partially integrated to filter out the residual out-of-band noise. In addition, a partial instantaneous energy density detector for acoustic transient signals is established, combined with a width-envelope detection strategy. Lake-trial data processing results show that the partial instantaneous energy density detector can reduce noise effectively and extract transient signals at low SNR, with better performance than the traditional energy detector.
Owing to their lower bandwidth consumption at the streaming server and higher scalability, P2P streaming systems are widely adopted and deployed. However, the heterogeneity of bandwidth resources and playback positions among peers can easily lead to load imbalance, which may severely degrade video playback quality at the peers. This paper focuses on bandwidth request allocation, aiming to substantially balance the load on the peers in a P2P streaming network. The problem of multiple neighboring peers contending for service is modeled as a non-cooperative game, and the optimal bandwidth request allocation policy, called Game based Bandwidth Request Allocation (GBRA), is obtained by searching for the Nash equilibrium of this game. Numerical results show that, compared with classical bandwidth request allocation policies, the proposed policy effectively improves the load balance of P2P streaming networks and decreases the latency of streaming data retrieval at the peers.
Most current botnet detection approaches are based on analyzing network traffic, and they usually rely on the malicious behaviors of bots or on information provided by external systems. Moreover, the heavy computation of traditional approaches makes it difficult to meet real-time requirements. An online botnet detection approach based on MapReduce is therefore proposed. The approach detects botnets by analyzing network traffic and extracting the internal relationships of flows. The data analysis is carried out on a cloud platform, which allows data capture and data analysis to run simultaneously and realizes online detection. Experimental results show that the detection rate of the approach reaches 90% while the false positive rate stays below 5%, and that the speedup is close to linear for large data sets, proving the feasibility of applying cloud computing technologies to botnet detection.
To describe users' access behavior dynamically, efficiently, and accurately, a novel detection model for Application-layer Distributed Denial of Service (App-DDoS) attacks based on maximal frequent sequential pattern mining is proposed, named the App-DDoS Detection Algorithm based on Maximal Frequent Sequential Pattern mining (ADA_MFSP). After mining the maximal frequent sequential patterns of the training and detection Web Access Sequence Databases (WASD), the model introduces sequence alignment, viewing time, and request-circulation abnormality to characterize App-DDoS attacks, and finally achieves attack detection. Experiments prove that the ADA_MFSP model can not only detect various kinds of App-DDoS attacks but also has good detection sensitivity.
Software Defined Networking (SDN) has become a hot topic in recent years, and flow update is one of its important issues. In this paper, a classification-based consistent flow update scheme is proposed, which has the advantages of strong generality and of effectively lightening the working load on the controller. Logic synthesis is used to demonstrate how the scheme maintains consistency throughout the update process. Simulation results in multiple scenarios show that the proposed scheme achieves better generality than existing schemes in related work; in addition, it significantly reduces the working load on the controller while taking nearly the same update time as the compared schemes.
Virtual network mapping usually aims at minimal resource consumption or shortest paths in the substrate network but ignores the resources demanded by the hidden hops, which creates bottlenecks due to resource shortage at those hops and ultimately degrades the performance of the entire substrate network and the acceptance rate of subsequent virtual network requests. To address this problem, and taking the resources demanded by the hidden hops into account, this paper aims at simultaneously balancing the load on substrate nodes and substrate links, mathematically formulates the hop-constrained virtual network mapping problem, and solves it using a multi-objective Load-Balancing Particle Swarm Optimization (LB-PSO) algorithm. The algorithm eliminates resource bottlenecks efficiently and provides a more balanced substrate network for subsequent virtual network requests, thus improving the construction success rate of virtual networks, the availability of network resources, and the profits of infrastructure providers.
Research on modeling and analyzing adaptive business processes based on Web services is very important for developing and deploying applications on the mobile Internet. To model and verify the reliability and adaptability of services effectively, this paper proposes a probabilistic approach to formally describe and analyze the reliability properties of adaptive business processes. First, the semantic similarity of services and the compatibility of data types are used to provide a candidate set for the adaptive application, so that business processes can adapt to changes in their environment. Then, probabilistic model checking is used to analyze the soundness and reliability of the adaptive business processes. Finally, a video-transfer application is modeled and verified with the proposed method, showing that the approach provides an effective underlying guideline for modeling and analyzing adaptive applications on the mobile Internet.
To improve the security of accessing outsourced data in cloud computing, an established tree-based key management scheme suitable for the owner-write-users-read/write scenario is enhanced. The new scheme takes full advantage of a hardware chip called the Trusted Platform Module (TPM) to deal with malicious users in this scenario. It solves several problems related to session keys, the other keys used for encrypting and decrypting data blocks in the cloud, and changes of user access rights; problems such as authenticating users and securing their computing environments are also considered. Meanwhile, the original scheme is found to be vulnerable to type and replay attacks, and fixes are designed accordingly. Finally, the new scheme is modeled using the applied pi calculus, and the security of the data-access procedure is analyzed with the automated reasoning tool ProVerif. The results indicate that the extended scheme is more practical and secure than the original.
This paper proposes an improved Winograd-based method to solve the problem that existing implementations of long-sequence Fast Fourier Transforms (FFT) on the TS201 processor do not take full account of the influence of cache misses on efficiency. The new method makes maximum use of the cache's read/write advantages by optimizing the row and column access patterns so as to avoid three explicit matrix transpositions, and hides the twiddle-factor multiplications by reconfiguring the butterfly computation. Test results show that the performance of the cache-optimized FFT implementation is significantly improved, and that it can be used for fast acquisition of pulse compression in radar systems.