Unsupervised video segmentation using temporal coherence of motion
Alsaaran, Hessah, author
Draper, Bruce A., advisor
Beveridge, J. Ross, advisor
Whitley, Darrell, committee member
Peterson, Christopher, committee member
Spatio-temporal video segmentation groups pixels with the goal of representing moving objects in scenes. It is a difficult task for many reasons: parts of an object may look very different from each other, while parts of different objects may look similar or overlap. Of particular importance to this dissertation, parts of non-rigid objects such as animals may move in different directions at the same time. While appearance models are good for segmenting visually distinct objects and traditional motion models are good for segmenting rigid objects, there is a need for a new technique to segment objects that move non-rigidly. This dissertation presents a new unsupervised motion-based video segmentation approach. It segments non-rigid objects based on motion temporal coherence (i.e., correlations in when points move), instead of motion magnitude and direction as in previous approaches. The hypothesis is that although non-rigid objects can move their parts in different directions, their parts tend to move at the same time. In the experiments, the proposed approach achieves better results than related state-of-the-art approaches on a video of zebras in the wild, and on 41 videos from the VSB100 dataset.
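The core idea above — grouping points by *when* they move rather than by how fast or in which direction — can be sketched as follows. This is an illustrative sketch only, not the dissertation's actual algorithm: the per-point motion-magnitude time series, the Pearson-correlation coherence measure, the greedy grouping, and the threshold are all assumptions made for the example.

```python
# Illustrative sketch (not the dissertation's exact method): group tracked
# points by the temporal coherence of their motion. Each point contributes a
# per-frame motion-magnitude series; magnitudes discard direction, so two
# parts moving in opposite directions at the same times still correlate.
import numpy as np

def temporal_coherence(mags):
    """Pairwise Pearson correlation of per-frame motion magnitudes.

    mags : (n_points, n_frames) array of motion magnitudes.
    Returns an (n_points, n_points) coherence matrix in [-1, 1].
    """
    centered = mags - mags.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard: stationary points have zero variance
    unit = centered / norms
    return unit @ unit.T

def group_points(coherence, threshold=0.8):
    """Greedy grouping: a point joins a group if coherent with its seed.

    The threshold is an arbitrary illustrative choice.
    """
    n = coherence.shape[0]
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        labels[i] = next_label
        for j in range(i + 1, n):
            if labels[j] < 0 and coherence[i, j] >= threshold:
                labels[j] = next_label
        next_label += 1
    return labels

# Toy example: two parts of one non-rigid object move at the same times
# (with different magnitudes), while a third point moves at other times.
t = np.arange(20)
part_a = (t % 4 < 2).astype(float)        # moves on frames 0,1,4,5,...
part_b = (t % 4 < 2).astype(float) * 1.5  # same timing, larger magnitude
other  = (t % 4 >= 2).astype(float)       # moves on the remaining frames
C = temporal_coherence(np.stack([part_a, part_b, other]))
labels = group_points(C)
# part_a and part_b share a label; `other` gets its own group.
```

The toy series are built so that `part_a` and `part_b` have perfectly correlated timing (coherence 1) despite different magnitudes, while `other` is anti-correlated with both, which is the behavior the hypothesis predicts for parts of one animal versus the background.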
Includes bibliographical references.
temporal coherence of motion