VMorph: Motion and Feature-Based Video Metamorphosis
Robert Szewczyk, Andras Ferencz, Henry Andrews
Automatic Video Morphing
Currently, to do a video morph, a user must:
- manually enter feature lines in a number of key frames
- linearly interpolate to find the feature lines in the remaining frames
- apply still-image morphing
To reduce the amount of user input and time, and to improve morphing quality, our algorithm:
- aggregates the feature lines into groups
- tracks the groups of feature lines
- morphs foreground and background separately
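The interpolation step of the current workflow can be sketched as follows; the function name and the (n_lines, 2, 2) endpoint layout are illustrative choices, not taken from the slides:

```python
import numpy as np

def interpolate_feature_lines(lines_a, lines_b, t):
    """Linearly interpolate feature lines between two key frames.

    lines_a, lines_b: arrays of shape (n_lines, 2, 2) holding the two
    (x, y) endpoints of each feature line in the key frames.
    t: position of the in-between frame, in [0, 1].
    """
    lines_a = np.asarray(lines_a, dtype=float)
    lines_b = np.asarray(lines_b, dtype=float)
    return (1.0 - t) * lines_a + t * lines_b

# One feature line moving from (0,0)-(10,0) to (4,4)-(14,4):
key0 = [[(0, 0), (10, 0)]]
key1 = [[(4, 4), (14, 4)]]
mid = interpolate_feature_lines(key0, key1, 0.5)  # endpoints (2,2) and (12,2)
```

This per-endpoint interpolation is exactly what drifts away from the true motion when the subject does not move linearly between key frames, which is what motivates tracking instead.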
Groups of Feature Lines
- Lines within a group stay in the same relation to one another
- The motion of a group of feature lines is described by an affine transform
- No loss of flexibility: the animator groups the lines, and grouping can range from a single line per group to all feature lines in a single group
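Moving a whole group with one affine transform might look like this minimal sketch (the A/b parameterization and the function name are assumptions for illustration):

```python
import numpy as np

def transform_group(lines, A, b):
    """Apply the affine transform p -> A @ p + b to every endpoint of a
    group of feature lines (shape (n_lines, 2, 2)).  Because a single
    transform moves every line, the relative arrangement of lines within
    the group is preserved."""
    lines = np.asarray(lines, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    return lines @ A.T + b

# Translate a group of two parallel lines by (5, -1):
group = np.array([[(0, 0), (1, 0)], [(0, 1), (1, 1)]], dtype=float)
moved = transform_group(group, np.eye(2), [5, -1])  # every endpoint shifts by (5, -1)
```

Rotation, scaling, and shear of the whole group are expressed the same way through the matrix A.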
Feature Line Group Tracking
- Compute a dense motion field MF using the census transform
  - a dense field is needed, since feature lines do not necessarily correspond to image edges
- Translate all the points in a feature line group using MF
  - allows for the disappearance of individual feature lines
- Fit an affine transform T to the translated points
  - currently: general least mean squares with some robustness (discard a percentage of the largest outliers); future: general least median of squares
- Apply T to the feature line group to find the new location of each line in the group
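The fit-and-discard step can be sketched as below, using ordinary least squares plus one round of outlier rejection in place of the least-median-of-squares variant; the function name, discard fraction, and test data are all illustrative:

```python
import numpy as np

def fit_affine_robust(src, dst, discard_frac=0.2):
    """Fit an affine map src -> dst by least squares, discard the
    correspondences with the largest residuals, and refit.

    src, dst: (n, 2) arrays of corresponding points (dst comes from
    translating src by the dense motion field)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]

    def lstsq(mask):
        # Solve X[mask] @ M = dst[mask] for the 3x2 parameter matrix M.
        M, *_ = np.linalg.lstsq(X[mask], dst[mask], rcond=None)
        return M

    M = lstsq(np.ones(len(src), bool))
    resid = np.linalg.norm(X @ M - dst, axis=1)
    n_keep = max(3, int(len(src) * (1.0 - discard_frac)))
    keep = np.zeros(len(src), bool)
    keep[resid.argsort()[:n_keep]] = True
    return lstsq(keep)

# A 3x3 grid of points translated by (2, 3), with the centre
# correspondence corrupted by a bad motion vector:
src = np.array([(x, y) for x in range(3) for y in range(3)], float)
dst = src + [2, 3]
dst[4] = [30, 30]                # outlier at the grid centre (1, 1)
M = fit_affine_robust(src, dst)  # recovers the pure translation
```

The outlier rejection matters because individual motion vectors from the dense field can be wrong where a feature line is occluded or disappears.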
Image Segmentation
- Observation: foreground and background are distinct and move differently
  - when morphed simultaneously, different parts of the image influence each other
- Conclusion: foreground and background should be morphed separately
  - still-image morphing is not aware of image segmentation
  - for video sequences, the segmentation can be done automatically
- Background: use salient stills to reduce the problem to still-image morphing
- Foreground: morphing with a mask
Morphing With a Mask
[Pipeline diagram: create a mask; still-morph the mask and the still; add the result back together]
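The final compositing of the separately morphed layers might be sketched as follows (grayscale frames; the function name and blend formula are an assumption about the "add" step, not taken from the slides):

```python
import numpy as np

def composite(fg, bg, mask):
    """Blend a separately-morphed foreground and background with a
    (possibly soft) mask in [0, 1].  The mask itself is morphed along
    with the foreground feature lines, so it stays aligned with the
    foreground in every in-between frame."""
    mask = np.asarray(mask, dtype=float)
    return mask * fg + (1.0 - mask) * bg

fg = np.full((4, 4), 200.0)            # morphed foreground frame
bg = np.full((4, 4), 50.0)             # reconstructed/morphed background
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                   # foreground occupies the centre
out = composite(fg, bg, mask)          # 200 inside the mask, 50 outside
```

A soft (fractional) mask at the boundary avoids a hard seam between the two layers.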
Background Reconstruction
Feature Line Tracking Results
Morphing Results: VMorph vs. Still-Image Morphing