Detail to attention: Exploiting Visual Tasks for Selective Rendering
Kirsten Cater (1), Alan Chalmers (1) and Greg Ward (2)
(1) University of Bristol, UK; (2) Anyhere Software, USA

Presentation transcript:

Detail to attention: Exploiting Visual Tasks for Selective Rendering
Kirsten Cater (1), Alan Chalmers (1) and Greg Ward (2)
(1) University of Bristol, UK; (2) Anyhere Software, USA

Contents of Presentation
- Introduction: Realistic Computer Graphics
- Flaws in the Human Visual System
- Magic Trick!
- Previous Work
- Counting Teapots Experiment
- Perceptual Rendering Framework
- Example Animation
- Conclusions

Realistic Computer Graphics
The major challenge: achieve realism at interactive rates. How do we keep computational costs realistic?

The Human Visual System
Good, but not perfect! Flaws in the human visual system, such as change blindness and inattentional blindness, let us avoid wasting computational time on detail the viewer will not perceive.

Magic Trick to Demonstrate Inattentional Blindness
Please choose one of the six cards below. Focus on the card you have chosen.

Magic trick (2) Can you still remember your card?

Magic trick (3) Here are the remaining five cards. Is your card there? Did I guess right, or is it an illusion?

Magic trick: The Explanation
You just experienced inattentional blindness: none of the original six cards was displayed! Because you attended only to your chosen card, you did not notice that all the cards changed.

Previous Work: Bottom-up Processing (Stimulus Driven)
- Perceptual metrics: Daly, Myszkowski et al., etc.
- Peripheral vision models: McConkie, Loschky et al., Watson et al., etc.
- Saliency models: Yee et al., Itti & Koch, etc.

Top-down Processing (Task Driven)
Yarbus (1967) recorded observers' fixations and saccades while they answered specific questions about the scene they were viewing. The saccadic patterns produced depended on the question that had been asked. The implication: if we do not perceive parts of the scene, why render them to such a high level of fidelity?

Counting Teapots Experiment
Inattentional blindness goes beyond peripheral-vision effects: observers fail to notice an object's rendering quality if it is unrelated to the task at hand, even when they fixate on it. This is top-down (task-driven) processing, unlike saliency models, which are stimulus-driven (bottom-up). Experiment: participants count teapots in two images, then answer questions about any differences they observed between the images. As a memory aid, participants also choose which image they saw from a pair.

Counting Teapots Experiment: renderings compared at two resolutions (3072 and 1024).

Counting Teapots Experiment: pre-experiment to find the detectable resolution.

Counting Teapots Experiment

Eye Tracking Verification

Eye Tracking Verification with VDP

Counting Teapots Conclusion
The failure to distinguish the difference in rendering quality between the teapots (selectively rendered at high quality) and the other, low-quality objects is NOT due purely to peripheral vision effects. Observers fixate on the low-quality objects, but because these objects are not relevant to the task at hand, they fail to notice the reduction in rendering quality!

Perceptual Rendering Framework
- Uses the results and theory from the experiment to design a "just in time" animation system
- Exploits inattentional blindness and image-based rendering (IBR)
- Generalizes to other rendering techniques
- Demonstration system uses Radiance
- Potential for real-time applications
- Error visibility tied to attention and motion

Rendering Framework (pipeline)
Input (task, geometry, lighting, view) → High-level Vision Model with Geometric Entity Ranking and Object Map & Motion Lookup → Task Map → First Order Render → Current Frame & Error Estimate → Contrast Sensitivity Model → Error Conspicuity Map → Refine Frame → Frame Ready? If no, iterate; if yes, output the frame (last frame).
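The iterate-until-ready loop above can be sketched as a time-budgeted refinement driven by an error-conspicuity map. This is a minimal illustration, not the Radiance implementation: the callback names, the per-iteration batch size of 64 pixels, and the error tolerance are all hypothetical.

```python
import time
import numpy as np

def render_progressive(first_order_render, refine, conspicuity,
                       frame_budget_s=0.1, tol=1e-3):
    """Sketch of the framework loop: a cheap first-order render, then
    refinement samples spent on the most conspicuous errors until the
    frame deadline or the error tolerance is reached."""
    frame, err = first_order_render()              # initial frame + error map
    deadline = time.monotonic() + frame_budget_s
    while time.monotonic() < deadline and err.max() > tol:
        ec = conspicuity(err)                      # attention-weighted error
        worst = np.argsort(ec, axis=None)[-64:]    # most conspicuous pixels
        frame, err = refine(frame, err, worst)     # spend samples there
    return frame, err
```

The key design point carried over from the slides is that the loop's priority queue is not raw error but *conspicuous* error, so samples flow toward regions the viewer is likely to attend.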

Example Frame w/ Task Objects

Example Animation
The following animation was rendered at two minutes per frame on a 2000-model G3 laptop. Many artifacts are intentionally visible, but less so if you are performing the task.

Error Map Estimation
- Stochastic errors may be estimated from neighborhood samples.
- Systematic error bounds may be estimated from knowledge of algorithm behavior.
- Estimate accuracy is not critical for good performance.
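The first point can be illustrated with a simple neighborhood estimator. The slide only says stochastic error "may be estimated from neighborhood samples", so the specific choice below (local standard deviation divided by the square root of the sample count, a standard-error heuristic over a 3x3 window) is an assumption, not the paper's estimator.

```python
import numpy as np

def stochastic_error_map(img, k=3):
    """Estimate per-pixel stochastic (noise) error from the spread of a
    k x k pixel neighborhood; edge pixels reuse their nearest neighbors."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    # local std / sqrt(n): a standard-error-style stochastic error estimate
    return win.std(axis=(-1, -2)) / np.sqrt(k * k)
```

As the slide notes, the framework tolerates crude estimates: the error map only has to rank regions well enough to steer refinement, not bound the error tightly.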

Initial Error Estimate

Image-based Refinement Pass
- Since we know the exact motion, IBR works very well in this framework.
- Select image values from the previous frame; criteria include coherence, accuracy, and agreement.
- Replace the current sample and degrade its error estimate.
- Error degradation eventually retires reused samples.
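The reuse-and-retire idea can be sketched as follows. All names and constants here are illustrative assumptions (the slides do not give the acceptance test or the decay factor): `flow` maps each current pixel to its integer source coordinates in the previous frame, or (-1, -1) where no source exists, and reused samples have their error multiplied by a decay factor so they eventually exceed the acceptance threshold and are re-rendered fresh.

```python
import numpy as np

def ibr_refine(prev_frame, prev_err, flow, err_decay=1.5, max_err=0.5):
    """Reuse previous-frame samples via known per-pixel motion, degrading
    the error of each reused sample so it is eventually retired."""
    frame = np.zeros_like(prev_frame)
    err = np.full_like(prev_err, np.inf)          # inf = render fresh
    ys, xs = flow[..., 0], flow[..., 1]
    valid = (ys >= 0) & (xs >= 0)
    src_e = np.where(valid, prev_err[ys.clip(0), xs.clip(0)], np.inf)
    accept = valid & (src_e * err_decay < max_err)  # crude accuracy test
    frame[accept] = prev_frame[ys[accept], xs[accept]]
    err[accept] = src_e[accept] * err_decay         # degrade reused error
    return frame, err
```

Pixels rejected here keep infinite error, so the refinement loop renders them first; repeatedly reused pixels accumulate decay until they too fail the test, which is the "sample retirement" the slide describes.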

Contrast Sensitivity Model
Additional samples are directed based on Daly's CSF model, CSF(ρ, v_R), where ρ is spatial frequency and v_R is retinal velocity.
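The slide omits the formula itself. A common published form of this spatiovelocity CSF (Kelly's model as retuned by Daly, 1998) is sketched below; the constants are the ones usually cited for that model, not values taken from this presentation, so treat them as an assumption.

```python
import numpy as np

def daly_csf(rho, v_r, c0=1.14, c1=0.67, c2=1.7):
    """Spatiovelocity contrast sensitivity, after Kelly (1979) / Daly (1998).
    rho: spatial frequency (cycles/degree); v_r: retinal velocity (deg/s).
    Constants c0, c1, c2 are Daly's commonly published CRT calibration."""
    v = c2 * v_r
    k = 6.1 + 7.3 * np.abs(np.log10(v / 3.0)) ** 3
    rho_max = 45.9 / (v + 2.0)                     # peak-frequency shift
    return (k * c0 * v * (c1 * 2 * np.pi * rho) ** 2
            * np.exp(-c1 * 4 * np.pi * rho / rho_max))
```

The qualitative behavior the framework relies on: as retinal velocity grows, sensitivity to high spatial frequencies drops, so fast-moving (untracked) regions can be refined less.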

Error Conspicuity Model
Retinal velocity depends on task-level saliency: the eyes track task objects, so errors on them have low retinal velocity and are highly conspicuous, while errors on untracked objects move quickly across the retina and are far less visible.

Error Conspicuity Map

Implementation Example
Compared with a standard rendering that finished in the same time, our framework produced better quality on the task objects. Rendering the entire frame at that high quality would take about 7 times longer with the standard method. (Images: framework rendering vs. standard rendering.)

Overall Conclusion
By taking advantage of flaws in the human visual system, we can save substantial computation by rendering at lower quality where the user is not attending. Thus, for VR applications where the task is known a priori, exploiting these flaws can yield dramatic computational savings.

The End! Thank you