Integrating Region and Boundary Information for Improved Spatial Coherence in Object Tracking

Desmond Chung, W.J. MacLean, S. Dickinson
2004 Conference on Computer Vision and Pattern Recognition Workshop  
This paper describes a novel method for performing spatially coherent motion estimation by integrating region and boundary information. The method begins with a layered, parametric flow model. Since the resulting flow estimates are typically sparse, we use the computed motion in a novel way to compare intensity values between images, thereby providing improved spatial coherence of a moving region. This dense set of intensity constraints is then used to initialize an active contour, which is influenced by both motion and intensity data to track the object's boundary. The active contour, in turn, provides additional spatial coherence by identifying motion constraints within the object boundary and using them exclusively in subsequent motion estimation for that object. The active contour is therefore automatically initialized once and, in subsequent frames, is warped forward based on the motion model. The spatial coherence constraints provided by the motion and boundary information act together to overcome their individual limitations. Furthermore, the approach is general, making no assumptions about a static background or a static camera. We apply the method to image sequences in which both the object and the background are moving.

Previous Work

Previous work can be divided into region-based approaches [22, 21, 5, 20, 13, 9, 18, 10] and boundary-based approaches [11, 2, 17]. Among the region-based approaches, some [5, 20, 13, 18, 10] can be classified as layered approaches, with the latter two using models to describe image regions. Our own layered motion technique is
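The motion-compensated intensity comparison that yields the dense region support can be sketched as follows. This is only an illustrative reading of the abstract, not the paper's implementation: the affine model (A, t), the threshold tau, the nearest-neighbor sampling, and the function name motion_compensated_support are all assumptions made for the sketch.

```python
import numpy as np

def motion_compensated_support(I0, I1, A, t, tau=10.0):
    """Mark pixels of I1 whose intensity matches the affinely warped I0.

    I0, I1: grayscale frames as float arrays of the same shape.
    (A, t): affine motion model mapping frame-0 coords to frame-1 coords,
            x' = A @ x + t (a stand-in for the layered flow parameters).
    Returns a boolean mask: True where the motion-compensated intensity
    difference is below tau, i.e. dense support for the moving region.
    """
    h, w = I1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # Invert the forward model to find where each frame-1 pixel came from.
    src = (dst - t) @ np.linalg.inv(A).T
    sx = np.rint(src[:, 0]).astype(int)   # nearest-neighbor sampling
    sy = np.rint(src[:, 1]).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    support = np.zeros(h * w, dtype=bool)
    idx = np.flatnonzero(inside)
    support[idx] = np.abs(I0[sy[idx], sx[idx]] - I1.ravel()[idx]) < tau
    return support.reshape(h, w)

# Toy example: a bright square translating 2 pixels to the right.
I0 = np.zeros((10, 10)); I0[3:6, 3:6] = 100.0
I1 = np.zeros((10, 10)); I1[3:6, 5:8] = 100.0
mask = motion_compensated_support(I0, I1, np.eye(2), np.array([2.0, 0.0]))
```

In this reading, the resulting mask is the dense set of intensity constraints from which the active contour would be initialized; pixels whose back-warped source falls outside the frame are conservatively excluded.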
doi:10.1109/cvpr.2004.370