2004 Vol. 26, No. 11
2004, 26(11): 1681-1685.
Abstract:
Evaluating and calculating the capacity of an information hiding system is a crucial topic in the field. From the viewpoint of coding theory and set theory, this paper suggests a methodology for calculating the capacity of information hiding systems on still images in the discrete case. Several definitions are given, and it is concluded that the capacity in the absence of attackers depends only on the distortion function and the Probability Mass Function (PMF) of the source data. Furthermore, capacity results for information hiding in the presence of attackers are also given.
2004, 26(11): 1686-1692.
Abstract:
A nonlinear multiscale pyramidal decomposition based on the median transform is first presented. A denoising algorithm is then given that can restore images corrupted by impulse noise and Gaussian noise. The coefficients of the image under the median pyramidal transform represent different characteristics, so the transform can effectively separate noise from the image. Different coefficient-suppression schemes can be adopted to remove different types of noise. Simulation results indicate that the method is effective and superior to other methods.
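The median-pyramid denoising idea described above can be sketched as follows. This is an illustrative approximation, not the authors' algorithm: the 3x3 median window, the two-level pyramid, and the rule of zeroing large detail coefficients (to suppress impulse spikes) are all assumptions.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication (no SciPy dependency)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifted = [p[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.median(np.stack(shifted), axis=0)

def median_pyramid(img, levels=2):
    """Decompose: each level stores the detail (image minus its median-smoothed version)."""
    details, cur = [], img.astype(float)
    for _ in range(levels):
        smooth = median3x3(cur)
        details.append(cur - smooth)
        cur = smooth
    return details, cur  # detail layers plus coarse residual

def denoise_impulse(img, levels=2, thresh=30.0):
    """Zero out large detail coefficients (impulse spikes) and reconstruct."""
    details, coarse = median_pyramid(img, levels)
    rec = coarse
    for d in reversed(details):
        rec = rec + np.where(np.abs(d) <= thresh, d, 0.0)
    return rec
```

On a smooth image, reconstruction is exact wherever details are kept; only the isolated impulse spikes, which land in the detail layers, are discarded.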
2004, 26(11): 1693-1699.
Abstract:
An algorithm for dominant color extraction in the DCT domain based on MPEG-7 is described, which provides an efficient and fast method to extract the dominant color feature directly from compressed bit streams. As part of the algorithm, a method for automatic threshold determination is also proposed. The algorithm is based on the JPEG standard and statistical parameters of the DCT coefficients. Comparative experimental results show that the proposed algorithm is effective and fast in dominant color extraction. A target application is similarity retrieval in compressed image databases or on the Internet.
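The key observation behind DCT-domain dominant color extraction is that the DC coefficient of each 8x8 block is proportional to the block's mean color, so dominant colors can be found from a histogram of DC values without full decompression. A minimal single-channel sketch (the bin count and dominance fraction are illustrative choices, not taken from the paper):

```python
import numpy as np

def block_dc(img, bs=8):
    """Per-block mean: equals the 8x8 DCT DC coefficient up to a scale factor."""
    h, w = img.shape
    blocks = img[:h // bs * bs, :w // bs * bs].reshape(h // bs, bs, w // bs, bs)
    return blocks.mean(axis=(1, 3))

def dominant_colors(img, nbins=8, frac=0.1):
    """Histogram the block DC values; bins holding more than `frac`
    of the blocks are reported as dominant colors (bin centers)."""
    dc = block_dc(img).ravel()
    hist, edges = np.histogram(dc, bins=nbins, range=(0.0, 256.0))
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[hist > frac * dc.size]
```

In a real JPEG stream the DC terms would be read from the entropy-decoded coefficients rather than recomputed from pixels.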
2004, 26(11): 1700-1705.
Abstract:
Aimed at the difficult problem of detecting distant small targets with very low SNR, the proposed target detection method based on multi-sensor, multi-level information fusion consists of two parts: feature-level fusion and decision fusion. In the feature-level fusion phase, all feature images of the dual-band IR images are first extracted; these feature images are then fused with an adaptive weighting method to obtain confidence images for the target decision; finally, the confidence images are scanned with the maximum-confidence-value rule to obtain target detection results at each level. In the decision fusion phase, the per-level detection results are fused with combination logic to produce the system's target detection output. The results show the effectiveness of the method.
2004, 26(11): 1706-1713.
Abstract:
This paper reviews several typical techniques of multi-modal image registration, exposes the relationships among them, and offers a full analysis and comparison using numerical examples. The techniques analyzed include those using registration measures such as entropy-based measures, PIU (Partition Intensity Uniformity) classes, and correlation-coefficient-based measures. Modified PIU criteria are also included. The experimental results demonstrate the effectiveness and applicability of the techniques, and provide a valid reference for choosing among registration techniques.
2004, 26(11): 1714-1720.
Abstract:
Automatic speech recognition in telecommunications environments still has a lower correct rate than its desktop counterpart. Improving the performance of telephone-quality speech recognition is an urgent problem for its practical application. Previous work has shown that the main reason for this performance degradation is the variational mismatch caused by different telephone channels between the testing and training sets. This paper proposes an efficient implementation that dynamically compensates for this mismatch, based on a phone-conditioned prior statistical model of the channel bias. The algorithm uses Bayes' rule to estimate the telephone channel and dynamically tracks time variations within the channel. In experiments on Mandarin Large Vocabulary Continuous Speech Recognition (LVCSR) over telephone lines, the average Character Error Rate (CER) decreases by more than 27% when this algorithm is applied; in the short-utterance test, the Word Error Rate (WER) is relatively reduced by 30%. At the same time, the structural delay and computational cost required by the algorithm are limited, with an average delay of about 200 ms, so it can be embedded into practical telephone-based applications.
2004, 26(11): 1721-1727.
Abstract:
This paper studies how to extract the common structure and processing methods of expectation from scene-specific expectation settings, and then puts forth algorithms to build an expectation model suitable for dialogue systems based on the task model. When incorporated with dialogue context, this model can create a dynamic situation that varies with the dialogue process, which endows the system with a preliminary ability to reason about users' intentions by reference to this situation, so as to improve the robustness and precision of semantic analysis and the dialogue success rate. Finally, experimental results prove the effectiveness and usability of this model.
2004, 26(11): 1728-1732.
Abstract:
This paper presents the construction of compactly supported, interpolating orthogonal multiwavelets based on the wavelet sampling theorem. With the new interpolating orthogonal multiwavelet basis, wavelet coefficients in the multiresolution representation can be obtained directly from a sampled signal. Thus the initialization of the discrete wavelet transform (prefiltering) can be simplified to the identity operator.
2004, 26(11): 1733-1737.
Abstract:
noise added to the received signals is analyzed, and the idea of applying a median filtering algorithm to nonlinear frequency estimation is proposed. Monte Carlo simulation is used to validate the idea. The results show that the Mean Square Error (MSE) of frequency estimation using the median filter is better than that using the mean-value filter.
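The claimed advantage of the median over the mean under impulsive noise is easy to reproduce with a small Monte Carlo experiment. The sketch below compares the two combiners on simulated instantaneous-frequency measurements; the noise model (Gaussian background with 5% large impulses) is an assumption for illustration, not the paper's setup.

```python
import numpy as np

def estimate_freq(true_freq, noise, combine):
    """Combine noisy per-sample frequency measurements into one estimate."""
    return combine(true_freq + noise)

rng = np.random.default_rng(1)
true_f = 100.0
trials = 2000
mse_mean = mse_med = 0.0
for _ in range(trials):
    # Gaussian background noise plus occasional large impulses
    n = rng.normal(0.0, 1.0, 64)
    spikes = rng.random(64) < 0.05
    n[spikes] += rng.normal(0.0, 50.0, spikes.sum())
    mse_mean += (estimate_freq(true_f, n, np.mean) - true_f) ** 2
    mse_med += (estimate_freq(true_f, n, np.median) - true_f) ** 2
mse_mean /= trials
mse_med /= trials
```

Because the impulses shift the mean but barely move the median, the median-based MSE comes out far smaller under this noise model.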
2004, 26(11): 1738-1745.
Abstract:
In this paper, an initial assumption of the SAR pixel distribution is derived from the H/α classifier. A Maximum Likelihood (ML) method is then introduced to improve the classification. Because the backscattering properties of a natural medium vary with the observation frequency, dual-frequency SAR images are combined to further improve the classification. Since speckle in SAR images disturbs classification accuracy, a vector speckle filter is applied to the dual-frequency images before classification. Experiments are carried out on data acquired by the NASA/JPL lab near the Tien Mountains, and pseudo-colored classification results for both single- and dual-frequency POLSAR images are presented. The results show that filtered dual-frequency fully polarimetric SAR data obtain the best classification result.
2004, 26(11): 1746-1751.
Abstract:
On the basis of the space geometry relations of spaceborne Synthetic Aperture Radar (SAR), its Doppler parameter formulation is built in this paper. Taking the earth's rotation and oblateness into consideration, the effects of orbit perturbation, attitude errors, and attitude stability on the SAR Doppler parameters are analyzed. The simulation results show that orbit disturbance is the key factor in Doppler error, and that enhanced attitude stabilization leads to a decline in both the Doppler error and its variability. The conclusion is valuable for the determination and optimization of the system parameters.
2004, 26(11): 1752-1757.
Abstract:
The velocity and acceleration of a target are regarded as among the most important features for real and artificial warhead recognition in ballistic missile defence phased array radar, and are also used as necessary information for velocity compensation of one-dimensional range profiles in wideband radar. Based on the low data rate of phased array radar, this paper proposes a digital method to achieve fine-line velocity tracking, and analyses the performance of pulse correlation and channel compensation, two methods useful for acceleration estimation during target acquisition. A new way to resolve velocity ambiguity is also proposed. Simulation proves that all the methods are correct and efficient.
2004, 26(11): 1758-1765.
Abstract:
Three-channel adaptive biorthogonal filterbanks via the lifting scheme are investigated. Using the subband coding gain as the design criterion, and starting from an arbitrary filterbank (orthonormal or biorthogonal), three-channel biorthogonal filterbanks are designed by the sequential adaptive lifting scheme. Experimental results show that the subband coding gain can be improved when an appropriate initial filterbank is chosen.
2004, 26(11): 1766-1770.
Abstract:
This paper discusses mirror-image interference in direct digital waveform synthesis. The formation and distribution characteristics of mirror images are illustrated, and the impact of the sampling rate that brings about mirror-image interference is analyzed. On this basis, the condition on the sampling rate that results in mirror-image interference is derived and verified by experiment. The conclusion provides a new theoretical basis for sampling-rate selection in digital waveform synthesis.
2004, 26(11): 1771-1777.
Abstract:
Because the received signals in a software radio are sampled at a fixed rate that is asynchronous with the data clock, symbol synchronization is implemented during the decimation process, replacing the usual approach of using an interpolator in the baseband stage; Maximum Likelihood (ML) timing synchronization techniques can be incorporated into the filter's polyphase decomposition in a natural way. To match the timing delay of the received signals to the data timing delay, the timing delay structure is adjusted using the timing error. Cascaded Integrator-Comb (CIC) filters and Half-Band Filters (HBF) serve as decimation filters, which reduces the system complexity and computational burden to a certain extent. Numerical simulation results show that the value of the timing delay structure converges to a constant with small fluctuation when the received signals' timing delay matches the data timing delay, and the output from the last FIR stage gives the optimal samples.
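A Cascaded Integrator-Comb (CIC) decimator of the kind mentioned above is attractive precisely because it needs no multipliers. A minimal sketch (the decimation ratio R and stage count N are illustrative): N integrators run at the input rate, the stream is decimated by R, and N comb (differencing) stages run at the output rate; the DC gain is R^N.

```python
import numpy as np

def cic_decimate(x, R=4, N=3):
    """N-stage CIC decimator: N integrators at the high rate,
    decimation by R, then N comb (differencing) stages at the low rate."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):           # integrator stages (running sums)
        y = np.cumsum(y)
    y = y[R - 1::R]              # keep every R-th sample
    for _ in range(N):           # comb stages (first differences)
        y = np.diff(y, prepend=0)
    return y
```

Feeding a constant input shows the filter settling to its DC gain R^N = 64 after the transient.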
2004, 26(11): 1778-1782.
Abstract:
In this paper, an algebraic method for constructing regular Low Density Parity Check (LDPC) codes without short cycles is proposed. With this method, regular LDPC codes with girth 8 can be constructed. Simulation results show that these codes achieve better performance than randomly constructed regular LDPC codes over AWGN channels.
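The paper's algebraic girth-8 construction is not reproduced here, but its building blocks can be illustrated: array-type LDPC codes stack cyclically shifted identity blocks, and freedom from length-4 cycles is equivalent to no two rows of H sharing two or more columns. A sketch (the array-code shift pattern i*j mod p is a standard choice, assumed for illustration):

```python
import numpy as np

def circulant(p, s):
    """p x p identity matrix cyclically shifted right by s columns."""
    return np.roll(np.eye(p, dtype=int), s, axis=1)

def array_ldpc(p, J, K):
    """Array-code parity-check matrix: block (i, j) is the identity
    shifted by (i * j) mod p; for prime p this avoids length-4 cycles."""
    return np.block([[circulant(p, (i * j) % p) for j in range(K)]
                     for i in range(J)])

def has_4cycle(H):
    """A length-4 cycle exists iff two rows of H overlap in >= 2 columns."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())
```

For prime p any two rows of the array code overlap in at most one column, so the checker reports girth at least 6; pushing the girth to 8, as in the paper, requires further constraints on the shift pattern.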
2004, 26(11): 1783-1786.
Abstract:
An initial reconstruction algorithm is given for generalized self-shrinking sequences using the ideas of the guessing attack. The results show that: (1) when both the characteristic polynomial of the Linear Feedback Shift Register (LFSR) and the linear combiner are known, the algorithm ensures cryptanalysis with complexity O((L/2)^3 · 2^(L-2)), l ≥ L/2; (2) when the linear combiner is unknown, the algorithm ensures cryptanalysis with complexity O(L^3 · 2^(2L-1)), l ≥ L; (3) when the characteristic polynomial of the LFSR is unknown, the algorithm ensures cryptanalysis with complexity O((2^L - 1) · L^(-1) · 2^(2L-1)), l ≥ L. Here L is the length of the LFSR.
2004, 26(11): 1787-1791.
Abstract:
Cryptographically strong sequences should not only have a large linear complexity, but should also show no significant decrease in linear complexity when a few terms are changed. This requirement leads to the concept of the k-error linear complexity of periodic sequences. In the following two cases: (1) gcd(N, p) = 1; (2) N = p^v, where p denotes the characteristic of the finite field GF(q), the counting function N_{N,0}(c), i.e., the number of N-periodic sequences with given linear complexity c, is derived, the expected value E_{N,0} of the linear complexity is determined, and a useful lower bound on the expected value E_{N,k} of the k-error linear complexity is established.
2004, 26(11): 1792-1798.
Abstract:
This paper discusses the routing and handover issue in NGSO rosette satellite constellations equipped with Inter-Satellite Links (ISL). Based on the variation regularity of the rosette constellation topology, a novel routing strategy named the circularly refreshing routing strategy is proposed. Then, by appropriately combining the routing and handover procedures, the Minimal-Hop Handover (MHH) strategy is proposed. Simulation results show that, compared with previous strategies, the MHH strategy performs better, with less propagation delay and a lower handover frequency, so it has good practical value.
2004, 26(11): 1799-1804.
Abstract:
In this paper, a space-time coded MIMO (Multiple-Input Multiple-Output) OFDM (Orthogonal Frequency Division Multiplexing) system using frequency spread coding is proposed, based on the fact that the fading effects on the transmitted signals differ from sub-carrier to sub-carrier owing to the frequency-selective fading property. The simulation results indicate that the proposed scheme not only effectively exploits the inherent frequency diversity of the frequency-selective channel while maintaining the spectral efficiency of the system, but also has low complexity and is easy to realize compared with other MIMO OFDM systems.
2004, 26(11): 1805-1811.
Abstract:
The tradeoff between performance and complexity must be taken into account in channel estimation algorithms for wireless OFDM systems. This paper proposes a new algorithm for channel estimation in wireless OFDM systems, which aims at reducing the complexity of the conventional algorithm by combining Wiener filtering with interpolation filtering. The analysis and simulation results show that the new algorithm reduces the complexity remarkably while the performance degrades only slightly.
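The interpolation-filtering half of such a channel estimator can be illustrated very simply: least-squares estimates at the pilot subcarriers are interpolated across the remaining subcarriers. The sketch below uses linear interpolation and a pilot spacing of 4, both assumptions for illustration (the Wiener-filtering stage the paper combines this with is not shown).

```python
import numpy as np

def interp_channel(pilot_est, pilot_idx, n_sub):
    """Linearly interpolate complex pilot channel estimates over all
    n_sub subcarriers (real and imaginary parts handled separately)."""
    k = np.arange(n_sub)
    return (np.interp(k, pilot_idx, pilot_est.real)
            + 1j * np.interp(k, pilot_idx, pilot_est.imag))
```

For a channel response that varies linearly across the band, this interpolation is exact at every subcarrier; real channels deviate from linearity, which is where the Wiener stage earns its keep.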
2004, 26(11): 1812-1818.
Abstract:
This paper proposes two new switch architectures that are more flexible and cost effective than the methods introduced in the literature. For a given packet loss probability requirement, the results demonstrate that (1) for non-bursty traffic, the non-degenerate form of Fiber Delay Lines (FDLs) is the most cost effective solution; (2) for bursty traffic, a combined use of degenerate FDLs and Tunable Wavelength Converters (TWCs) is a cost effective and robust solution. With the increase of the average burst length, i.e., burstiness, the number of TWCs needs to be increased so as to maintain a reasonable packet loss probability. However, even for traffic with a high degree of burstiness, the architecture that employs a set of degenerate FDLs and TWCs shared among the input lines is still a cost effective and robust solution.
2004, 26(11): 1819-1824.
Abstract:
In this paper, a shared restoration mechanism for optical networks based on signaling control is discussed in detail, and an implementation scheme using existing protocols is presented. Experiments prove that the shared mechanism can greatly increase resource utilization and efficiently decrease the restoration time. The mechanism meets the requirements of the convergence of IP and optical networks, and can be applied to ASON for efficient restoration.
2004, 26(11): 1825-1829.
Abstract:
An algorithm based on Clonal Strategies (CS) is presented to deal with the delay-constrained, least-cost multicast routing problem, which is known to be NP-complete. Simulations show that, compared with approaches based on the genetic algorithm, multicast routing based on CS has a faster convergence speed and better global search ability, with the properties of stability, agility, and simplicity of operation.
2004, 26(11): 1830-1836.
Abstract:
CORBA provides a set of Common Object Services (COS), which help users build large-scale distributed applications, but the Common Object Services Specifications (COSS) do not include an integrated formal description. Petri nets are a powerful instrument for modeling, analyzing, and simulating dynamic systems with concurrent and non-deterministic behavior. An extended colored Petri net is introduced to express the behaviors of individual objects, the concurrency between different objects, and the intra-object concurrency in the context of CORBA, and an example of the formal description of CORBA objects is given.
2004, 26(11): 1837-1842.
Abstract:
To accurately estimate multi-section traffic states of a freeway, new multi-section error state equations are built up. In the new model, the error propagation of both the traffic density and the average velocity is considered, and the extended Kalman filter is used to estimate all traffic states. To avoid a series of high-dimension problems, a modified weighted Gram-Schmidt orthogonal U-D factorization method is used for the time update and measurement update of the extended Kalman filter, achieving high numerical stability and computational efficiency. Considering the structure of the system matrix, block matrices are used in the U-D factorization algorithm. Results of a simulation of 100 freeway section states and of an actual application show that the new method can be efficiently used to estimate and predict freeway traffic flows.
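The paper uses a modified weighted Gram-Schmidt U-D factorization inside the extended Kalman filter; that exact variant is not reproduced here, but the underlying UDU^T factorization (P = U D U^T with U unit upper triangular and D diagonal), which gives the filter its numerical stability, can be sketched as follows (a plain Bierman-style version, assumed for illustration):

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ D @ U.T,
    with U unit upper triangular and D diagonal."""
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    P = P.astype(float)          # work on a copy of the upper triangle
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / D[j]
            for k in range(i + 1):   # downdate the leading submatrix
                P[k, i] -= U[k, j] * D[j] * U[i, j]
    return U, np.diag(D)
```

Propagating U and D instead of P itself keeps the covariance symmetric and positive definite in finite precision, which is the point of U-D filtering.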
2004, 26(11): 1843-1848.
Abstract:
Based on code division multiple access and amicable orthogonal designs, a non-coherent space-time transmission scheme is proposed for multiple-antenna systems, which allows full-diversity communication and is resistant to multiuser interference. A differential decorrelating receiver is then given for flat Rayleigh fading channels, which decouples not only the detection of different users but also the decoding of different data symbols.