Yang Jing-Jing, Su Xiao-Hong, Ma Pei-Jun. Combination of Multiple Spatial-temporal Slices for Fast Human Tracking[J]. Journal of Electronics & Information Technology, 2012, 34(10): 2382-2388. doi: 10.3724/SP.J.1146.2011.01313
Considering the high computational cost and complex object-representation problems in human tracking, this paper presents a model-free tracking approach that combines multiple spatial-temporal slices. The human is represented by a variable number of components in different spatial-temporal slice images, and component initialization requires no pre-defined object-part model. By introducing the spatial-temporal slice method, the original image-sequence volume is divided into multiple horizontal spatial-temporal slice images. In each slice image, candidate components are detected and tracked across frames. A combination scheme is proposed to assemble these components into distinct human objects based on their motion and position consistency. Thus, the traditional human-tracking problem in XYT 3D space is transformed into a combined component-tracking problem in XT 2D space. Experiments show that the proposed method reduces trajectory errors, is computationally efficient enough for real-time use, and is robust to missing human components.
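To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how an image-sequence volume can be divided into horizontal spatial-temporal (XT) slice images. It assumes the volume is a NumPy array of shape (T, H, W); the function name and toy data are illustrative only.

```python
import numpy as np

def extract_xt_slices(volume, rows):
    """Return a dict mapping each row index y to its XT slice image.

    volume: array of shape (T, H, W) -- T frames, H rows, W columns.
    Each slice has shape (T, W): row y of every frame stacked over time,
    so horizontal object motion appears as a trajectory in the 2D slice.
    """
    return {y: volume[:, y, :] for y in rows}

# Toy example: a 5-frame, 4x6 volume with a bright "component" that
# moves one pixel to the right per frame along row 2.
volume = np.zeros((5, 4, 6), dtype=np.uint8)
for t in range(5):
    volume[t, 2, t] = 255

slices = extract_xt_slices(volume, rows=[1, 2])
print(slices[2].shape)  # (5, 6)
```

In the row-2 slice the moving component traces a diagonal line in XT space, which is the 2D structure that candidate-component detection and tracking would operate on.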