Creating the Illusion of Motion in 2D Images
Reynold J. Bailey & Cindy M. Grimm

Goal

To manipulate a static 2D image to produce the illusion of motion.

Motivation

Traditional artists have developed several techniques for creating the illusion of motion in paintings. One common approach involves the use of spatial imprecision (misalignment of brush strokes); an example is Claude Monet's Rue Montorgueil, Paris, Festival of June 30 (1878). When we first look at such paintings, our fast-acting, low-acuity peripheral vision gives us a rough idea of where the brush strokes are in the scene. Mentally, we join these brush strokes together to form a complete picture; this process is called illusory conjunction. It is only upon closer scrutiny with our slower, high-acuity foveal vision that we notice that the strokes are misaligned. The illusion of motion arises because our visual system completes the picture differently with every glance (explanation adapted from Vision and Art: The Biology of Seeing [Livingstone 2002]).

Previous Work

Animating Pictures with Stochastic Motion Textures. Chuang et al.
Cartoon Blur: Non-Photorealistic Motion Blur. Kawagishi et al.
Color Table Animation. Shoup.
Speedlines: Depicting Motion in Motionless Pictures. Masuch et al.
Motion Without Movement. Freeman et al.

Method

Our technique manipulates a static 2D image to produce the illusion of motion by introducing spatial imprecision. It consists of two steps:

Segmentation: The input image is segmented into regions of roughly uniform color. Segmentation can be performed using any existing approach. Each segment is assigned a unique ID and a local Cartesian coordinate system whose origin corresponds to the upper-left corner of the segment's bounding box.

Spatial Perturbation: Each segment undergoes a 2D spatial transformation (a rotation and a translation) with respect to its local coordinate system. Perturbing segments in this manner introduces holes in the image plane. We experimented with various inpainting approaches for filling these holes, but found that simply pasting the perturbed segments onto the original image (or a Gaussian-blurred version of it) gave better results. (A rough code sketch of this process appears below, after Future Work.)

Preliminary Results

These results were obtained by applying a random rotation (maximum 10 degrees) and a random translation (maximum 20 pixels) to each segment. Input and output image pairs are shown side by side. Input images courtesy of flickr.com.

Future Work

Instead of naively performing random perturbations, we plan to organize the perturbations of the individual segments so that they follow some structured spatial pattern. To some extent, this depends on the context of the image: the segments in an image depicting a windy scene, for instance, should in general be perturbed along the direction that the wind is blowing. From a user-interface standpoint, the simplest way to specify the desired spatial pattern is to allow the user to sketch a rough flow field directly onto the image plane.

Another issue that arises when perturbing segments (especially with random perturbations) is that perceptually relevant segments (such as faces) may be partially covered by other segments. To eliminate or minimize this problem, we plan to extend our framework to allow users to specify the relative importance of the various segments; the most important segments will then be processed last.
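The two-step method above (segmentation, then per-segment perturbation) can be sketched roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes an RGB uint8 image and a NumPy/SciPy/scikit-image toolchain, uses SLIC purely as a stand-in for "any existing segmentation approach", rotates each patch about its centre rather than about the upper-left corner of its bounding box, and covers the holes by pasting perturbed segments over a Gaussian-blurred copy of the input, as the poster describes. The 10-degree and 20-pixel defaults match the maxima quoted under Preliminary Results.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, io, segmentation

def perturb_segments(image, labels, max_rot_deg=10.0, max_shift_px=20, seed=None):
    """Randomly rotate/translate each labeled segment and paste it onto a
    Gaussian-blurred copy of the input, which covers the holes left behind."""
    rng = np.random.default_rng(seed)
    blurred = filters.gaussian(image, sigma=3, channel_axis=-1)  # float in [0, 1]
    out = (blurred * 255).astype(np.uint8) if image.dtype == np.uint8 else blurred.copy()
    h, w = labels.shape

    for seg_id in np.unique(labels):
        mask = labels == seg_id
        ys, xs = np.nonzero(mask)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

        # Bounding-box patch of the segment plus its binary mask.
        patch = image[y0:y1, x0:x1]
        patch_mask = mask[y0:y1, x0:x1]

        # Random rotation (about the patch centre here, for simplicity;
        # the poster defines the local origin at the bounding box corner).
        angle = rng.uniform(-max_rot_deg, max_rot_deg)
        rot_patch = ndimage.rotate(patch, angle, reshape=False, order=1)
        rot_mask = ndimage.rotate(patch_mask.astype(float), angle,
                                  reshape=False, order=1) > 0.5

        # Random translation of the whole segment.
        dy = int(rng.integers(-max_shift_px, max_shift_px + 1))
        dx = int(rng.integers(-max_shift_px, max_shift_px + 1))

        # Paste only the segment's own pixels, clipped to the image bounds.
        for py, px in zip(*np.nonzero(rot_mask)):
            oy, ox = y0 + dy + py, x0 + dx + px
            if 0 <= oy < h and 0 <= ox < w:
                out[oy, ox] = rot_patch[py, px]
    return out

# Example usage (file names are hypothetical):
img = io.imread("input.jpg")
seg = segmentation.slic(img, n_segments=200, compactness=10)  # any segmenter works
io.imsave("output.png", perturb_segments(img, seg, seed=0))
```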
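The Future Work section proposes replacing random offsets with a structured pattern specified by a user-sketched flow field. The snippet below is a purely speculative illustration of that idea, not part of the poster: it assumes the sketch has already been interpolated into dense per-pixel flow components (flow_u, flow_v) and derives one translation per segment by sampling the field at the segment centroid; all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def directed_offsets(labels, flow_u, flow_v, max_shift_px=20):
    """Return a {segment_id: (dy, dx)} offset table that follows the flow field.

    flow_u, flow_v are dense per-pixel x/y flow components (e.g. interpolated
    from a handful of user strokes), assumed to lie roughly in [-1, 1].
    """
    offsets = {}
    for seg_id in np.unique(labels):
        cy, cx = ndimage.center_of_mass(labels == seg_id)   # segment centroid
        u = flow_u[int(round(cy)), int(round(cx))]
        v = flow_v[int(round(cy)), int(round(cx))]
        offsets[seg_id] = (int(round(v * max_shift_px)),     # row shift
                           int(round(u * max_shift_px)))     # column shift
    return offsets
```

The importance-ordering idea from Future Work could be handled at paste time in the same framework: sort segments by a user-assigned weight and paste the most important ones last, so that faces and other perceptually relevant regions end up on top.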