Zuo Jun-yi, Liang Yan, Zhao Chun-hui, Pan Quan, Cheng Yong-mei, Zhang Hong-cai. Gaussian Mixture Background Model Based on Entropy Image and Membership-Degree-Image[J]. Journal of Electronics & Information Technology, 2008, 30(8): 1918-1922. doi: 10.3724/SP.J.1146.2007.00049
In the classical Gaussian mixture background model, the number of Gaussian components is fixed and the correlation of class labels between adjacent pixels is not considered. As an improvement to that model, the main contribution of this paper is twofold. The first is to construct an entropy image that measures the complexity of each pixel's intensity distribution, and then to present an adaptation mechanism that automatically chooses the number of Gaussian components for each pixel according to the entropy image, so that the computational cost is reduced without significantly sacrificing detection accuracy. The second is to use the membership degree to measure the degree to which a pixel belongs to the background, and then to fuse the local information within its neighborhood for effective pixel classification, so that the classification decision becomes more reliable without significantly increasing the computational load. Experiments conducted on various real scenes demonstrate good performance in both computational speed and accuracy.
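A minimal Python sketch of the two ideas summarized in the abstract follows: a per-pixel entropy image computed from the intensity history is mapped to a per-pixel Gaussian-component count, and a background membership-degree image is averaged over a local window before thresholding. This is an illustrative reconstruction, not the authors' exact formulation; the bin count, entropy thresholds, component counts, and window size are assumptions chosen for the example.

import numpy as np
from scipy.ndimage import uniform_filter

def entropy_image(frames, bins=16):
    """Shannon entropy of each pixel's intensity histogram over `frames`
    (array of shape (T, H, W), intensities in [0, 255])."""
    T, H, W = frames.shape
    idx = (frames.astype(np.int32) * bins) // 256          # histogram bin index per sample
    hist = np.zeros((bins, H, W))
    for b in range(bins):
        hist[b] = (idx == b).sum(axis=0)
    p = hist / T                                           # per-pixel bin probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(p > 0, np.log2(p), 0.0)
    return -(p * logp).sum(axis=0)                         # (H, W) entropy map

def components_from_entropy(ent, thresholds=(0.5, 1.5, 2.5), k_values=(1, 2, 3, 5)):
    """Map entropy to a per-pixel component count: low entropy (stable
    background) gets few Gaussians, high entropy (complex distribution)
    gets more. Thresholds and counts here are illustrative."""
    K = np.full(ent.shape, k_values[0], dtype=np.int32)
    for t, k in zip(thresholds, k_values[1:]):
        K[ent > t] = k
    return K

def smoothed_membership(membership, window=5, threshold=0.5):
    """Average the per-pixel background membership degree over a local
    window, then threshold: 1 = background, 0 = foreground."""
    fused = uniform_filter(membership, size=window)
    return (fused >= threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.integers(0, 256, size=(50, 120, 160)).astype(np.float64)
    ent = entropy_image(frames)
    K = components_from_entropy(ent)
    mask = smoothed_membership(rng.random((120, 160)))
    print(ent.mean(), K.min(), K.max(), mask.mean())

In this sketch, pixels whose intensity history has low entropy are modeled with a single Gaussian, while pixels with complex, multimodal histories receive more components, which is one way to realize the computational saving described in the abstract.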