Saito, T. and Takahashi, T., "Comprehensible Rendering of 3-D Shapes," Proc. of SIGGRAPH '90: the genesis of image-space NPR.

G-buffers? Operations on G-buffers extract certain properties → various images. Combine these images with rendered images. These are image-space algorithms. (Saito, T. and Takahashi, T., "Comprehensible Rendering of 3-D Shapes," Proc. of SIGGRAPH '90.)

Computer-Generated Images. Special kinds of recording equipment yield special images: X-ray images, thermal images, sonar images.

G-buffers (geometric buffers). Translate this approach to computer graphics: rendering algorithms create images that show scene properties normally hidden from the viewer, such as object ID, distance to the view plane, surface normal, patch coordinates (u, v) for spline surfaces, and more.
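As a rough illustration (not from the paper), a CPU-side G-buffer set holding the per-pixel properties listed above might be laid out as in the following sketch; the struct and field names are assumptions:

```cpp
// A minimal sketch of what one pixel of a G-buffer set might store;
// field names and types are hypothetical.
#include <cstdint>
#include <vector>

struct GBufferPixel {
    uint32_t objectId;   // which object covers this pixel (0 = background)
    float    depth;      // distance to the view plane (eye coordinates)
    float    normal[3];  // surface normal at the visible point
    float    uv[2];      // patch coordinates for spline surfaces
};

// A G-buffer set is simply a screen-sized array of such records,
// filled once per frame by the renderer.
struct GBuffers {
    int width = 0, height = 0;
    std::vector<GBufferPixel> pixels;  // width * height entries

    GBuffers(int w, int h) : width(w), height(h), pixels(w * h) {}
    GBufferPixel&       at(int x, int y)       { return pixels[y * width + x]; }
    const GBufferPixel& at(int x, int y) const { return pixels[y * width + x]; }
};
```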

G-buffers. Pixel color now encodes 3D information, not just illumination, and reveals information about the underlying geometry. Operations on G-buffers include combination, edge detection, and more.

Reveal: RGB-buffer

Reveal: Object ID-buffer

Reveal: Depth-buffer

Reveal: Normal-buffer

Process pixel (x,y)

Saito, T. and Takahashi, T., "Comprehensible Rendering of 3-D Shapes," Proc. of SIGGRAPH '90.
Data structures + algorithms: drawing discontinuities, edges, contour lines, and curved hatching from the image buffers.
Edge classification:
- Profile: the border line of an object on the screen
- Internal: a line where two faces meet
Images generated: 1. depth, 2. first-order differential, 3. second-order differential, 4. profile, 5. internal edge.

Depth Image
Quantities involved: the distance from the viewpoint to the screen, the depth of the object (eye coordinates), and the length of one pixel (eye coordinates).
A grayscale image that maps [d_min, d_max] to [0, 255].
In OpenGL, the depth image content can be extracted by glReadPixels with GL_DEPTH_COMPONENT.
The scaling equalizes the gradient value of the depth image with the slope of the surface.
(Figures: shaded image, depth image.)
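A minimal sketch of that readback, assuming an already-initialized OpenGL context with the scene rendered and a viewport of size width × height; the [d_min, d_max] remapping is the normalization described above, and the helper name is hypothetical:

```cpp
// Sketch: read back the OpenGL depth buffer and remap it to 8-bit grayscale.
// Assumes a current GL context; 'width' and 'height' are the viewport size.
#include <GL/gl.h>
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> readDepthAsGrayscale(int width, int height)
{
    std::vector<float> depth(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    // Find the occupied depth range [d_min, d_max], ignoring the background
    // (which reads back as 1.0 when the depth buffer was cleared to 1.0).
    float dMin = 1.0f, dMax = 0.0f;
    for (float d : depth) {
        if (d < 1.0f) { dMin = std::min(dMin, d); dMax = std::max(dMax, d); }
    }

    std::vector<uint8_t> gray(width * height, 255);  // background stays white
    const float range = std::max(dMax - dMin, 1e-6f);
    for (size_t i = 0; i < depth.size(); ++i) {
        if (depth[i] < 1.0f)
            gray[i] = static_cast<uint8_t>(255.0f * (depth[i] - dMin) / range);
    }
    return gray;
}
```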

Depth Image: a grayscale image that maps [d_min, d_max] to [0, 255].

Depth image → first-order differential (Sobel's filter) → second-order differential.
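A minimal sketch of the two differential images, assuming the 8-bit grayscale depth image from the previous sketch: the first-order differential as a Sobel gradient magnitude and the second-order differential as a Laplacian response (the exact operators used in the paper may differ):

```cpp
// Sketch: first- and second-order differential images of a grayscale depth map.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Clamped pixel fetch so the filters can run up to the image border.
static float px(const std::vector<uint8_t>& img, int w, int h, int x, int y)
{
    x = std::max(0, std::min(w - 1, x));
    y = std::max(0, std::min(h - 1, y));
    return static_cast<float>(img[y * w + x]);
}

// First-order differential: Sobel gradient magnitude.
std::vector<float> firstOrderDifferential(const std::vector<uint8_t>& img, int w, int h)
{
    std::vector<float> out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float gx = -px(img,w,h,x-1,y-1) - 2*px(img,w,h,x-1,y) - px(img,w,h,x-1,y+1)
                       +px(img,w,h,x+1,y-1) + 2*px(img,w,h,x+1,y) + px(img,w,h,x+1,y+1);
            float gy = -px(img,w,h,x-1,y-1) - 2*px(img,w,h,x,y-1) - px(img,w,h,x+1,y-1)
                       +px(img,w,h,x-1,y+1) + 2*px(img,w,h,x,y+1) + px(img,w,h,x+1,y+1);
            out[y * w + x] = std::sqrt(gx * gx + gy * gy);
        }
    return out;
}

// Second-order differential: 4-neighbour Laplacian magnitude.
std::vector<float> secondOrderDifferential(const std::vector<uint8_t>& img, int w, int h)
{
    std::vector<float> out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float lap = px(img,w,h,x-1,y) + px(img,w,h,x+1,y)
                      + px(img,w,h,x,y-1) + px(img,w,h,x,y+1)
                      - 4 * px(img,w,h,x,y);
            out[y * w + x] = std::fabs(lap);
        }
    return out;
}
```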

Profile image and internal edge image. Normalization of the images distinguishes discontinuities from continuous changes; a limit on the gradient is used to eliminate 0th-order discontinuities.
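As a loose illustration of that classification (not the paper's exact procedure), assuming the two differential images from the previous sketch: profile edges could be taken where the first-order response is large, and internal edges where the second-order response is large while the gradient stays below the limit. The threshold values are hypothetical tuning parameters:

```cpp
// Sketch: classify edge pixels into profile and internal edges by thresholding
// the differential images. Threshold values are illustrative only.
#include <cstdint>
#include <vector>

struct EdgeImages {
    std::vector<uint8_t> profile;   // 255 where a profile (silhouette) edge is detected
    std::vector<uint8_t> internal_; // 255 where an internal edge (crease) is detected
};

EdgeImages classifyEdges(const std::vector<float>& firstOrder,
                         const std::vector<float>& secondOrder,
                         int w, int h,
                         float profileThreshold  = 200.0f,   // hypothetical
                         float internalThreshold = 40.0f,    // hypothetical
                         float gradientLimit     = 150.0f)   // hypothetical
{
    EdgeImages e{std::vector<uint8_t>(w * h, 0), std::vector<uint8_t>(w * h, 0)};
    for (int i = 0; i < w * h; ++i) {
        if (firstOrder[i] > profileThreshold)
            e.profile[i] = 255;                       // depth jump: object border
        else if (secondOrder[i] > internalThreshold &&
                 firstOrder[i] < gradientLimit)       // suppress 0th-order discontinuities
            e.internal_[i] = 255;                     // orientation change: crease
    }
    return e;
}
```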

OID (Object ID) Image

Operations on G-buffers (so far): edge detection.
- RGB-buffer → discontinuities in brightness (illumination), i.e., shadows, materials, objects
- z-buffer → discontinuities in depth, i.e., object boundaries, and also boundaries within one object (creases)
- OID-buffer → discontinuities in "objects", i.e., object silhouettes
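For the OID-buffer, edge detection is particularly simple: a pixel lies on a silhouette wherever its object ID differs from a neighbour's. A minimal sketch, assuming the hypothetical GBuffers layout introduced earlier:

```cpp
// Sketch: silhouette edges from the object-ID buffer. A pixel is marked as an
// edge if a right or lower neighbour belongs to a different object.
#include <cstdint>
#include <vector>

std::vector<uint8_t> objectIdEdges(const GBuffers& g)
{
    std::vector<uint8_t> edges(g.width * g.height, 0);
    for (int y = 0; y < g.height; ++y)
        for (int x = 0; x < g.width; ++x) {
            uint32_t id = g.at(x, y).objectId;
            bool edge =
                (x + 1 < g.width  && g.at(x + 1, y).objectId != id) ||
                (y + 1 < g.height && g.at(x, y + 1).objectId != id);
            if (edge) edges[y * g.width + x] = 255;
        }
    return edges;
}
```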

Schofield, S. Non-photorealistic Rendering: A Critical Examination and Proposed System. PhD thesis, School of Art and Design, Middlesex University, May.

Using Normal Maps to Find Creases and Boundaries. We can augment the silhouette edges computed from the depth map by using surface normals as well. We do this with a normal map, an image that represents the surface normal at each point on an object. The values in the (R, G, B) color components of a point on the normal map correspond to the (x, y, z) surface normal at that point. (Figures: depth map, normal map.) Decaudin, P. Cartoon-looking rendering of 3D scenes. Research Report #2919, INRIA Rocquencourt.
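Since color channels hold values in [0, 1] while normal components lie in [-1, 1], a common convention (an assumption here, not spelled out on the slide) is to remap each component before storing it:

```cpp
// Sketch: pack a unit surface normal into an 8-bit RGB pixel and unpack it
// again, using the usual [-1,1] -> [0,255] remapping. Names are hypothetical.
#include <cmath>
#include <cstdint>

struct RGB8 { uint8_t r, g, b; };

RGB8 encodeNormal(float nx, float ny, float nz)
{
    auto toByte = [](float c) {
        return static_cast<uint8_t>(std::lround((c * 0.5f + 0.5f) * 255.0f));
    };
    return { toByte(nx), toByte(ny), toByte(nz) };
}

void decodeNormal(RGB8 p, float& nx, float& ny, float& nz)
{
    auto toFloat = [](uint8_t c) { return (c / 255.0f) * 2.0f - 1.0f; };
    nx = toFloat(p.r); ny = toFloat(p.g); nz = toFloat(p.b);
}
```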

To compute the normal map for an object with a graphics package: First, we set the object color to white, and the material property to diffuse reflection. We then place a red light on the X axis, a green light on the Y axis, and a blue light on the Z axis, all facing the object. Additionally, we put lights with negative intensity on the opposite side of each axis. We then render the scene to produce the normal map. Each light will illuminate a point on the object in proportion to the dot product of the surface normal with the light’s axis. An example is shown in Figure (c,d). We can then detect edges in the normal map. These edges detect changes in surface orientation, and can be combined with the edges of the depth map to produce a reasonably good silhouette image (Figure (e)).
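A rough sketch of that light setup in legacy fixed-function OpenGL, assuming a current context and an object drawn in white with a purely diffuse material; how gracefully a given driver handles negative light intensities is not guaranteed, so this is illustrative rather than a drop-in recipe:

```cpp
// Sketch: six axis-aligned directional lights whose diffuse colors isolate the
// x, y and z components of the surface normal into the R, G and B channels.
#include <GL/gl.h>

void setupNormalMapLights()
{
    // Directional lights: a 4th position component of 0 means "direction".
    const GLfloat posX[] = { 1, 0, 0, 0 }, negX[] = { -1, 0, 0, 0 };
    const GLfloat posY[] = { 0, 1, 0, 0 }, negY[] = { 0, -1, 0, 0 };
    const GLfloat posZ[] = { 0, 0, 1, 0 }, negZ[] = { 0, 0, -1, 0 };

    const GLfloat red[]   = { 1, 0, 0, 1 }, negRed[]   = { -1, 0, 0, 1 };
    const GLfloat green[] = { 0, 1, 0, 1 }, negGreen[] = { 0, -1, 0, 1 };
    const GLfloat blue[]  = { 0, 0, 1, 1 }, negBlue[]  = { 0, 0, -1, 1 };

    glEnable(GL_LIGHTING);
    struct { GLenum id; const GLfloat* dir; const GLfloat* color; } lights[] = {
        { GL_LIGHT0, posX, red   }, { GL_LIGHT1, negX, negRed   },
        { GL_LIGHT2, posY, green }, { GL_LIGHT3, negY, negGreen },
        { GL_LIGHT4, posZ, blue  }, { GL_LIGHT5, negZ, negBlue  },
    };
    for (auto& l : lights) {
        glLightfv(l.id, GL_POSITION, l.dir);   // light shines along one axis
        glLightfv(l.id, GL_DIFFUSE,  l.color); // negative color on the far side
        glEnable(l.id);
    }
    // Render the object in white with a diffuse-only material afterwards;
    // the framebuffer then holds the normal map.
}
```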

Outline drawing with image processing. (a) Depth map. (b) Edges of the depth map. (c) Normal map. (d) Edges of the normal map. (e) The combined edge images.

Outline detection of a more complex model. (a) Depth map. (b) Depth map edges. (c) Normal map. (d) Normal map edges. (e) Combined depth and normal map edges.
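Combining the two edge images, as in panel (e), can be as simple as a per-pixel maximum (a logical OR for binary edge masks); a minimal sketch, assuming two same-sized single-channel edge images:

```cpp
// Sketch: combine depth-map edges and normal-map edges into one outline image
// by taking the per-pixel maximum of the two edge responses.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> combineEdges(const std::vector<uint8_t>& depthEdges,
                                  const std::vector<uint8_t>& normalEdges)
{
    std::vector<uint8_t> combined(depthEdges.size());
    for (size_t i = 0; i < depthEdges.size(); ++i)
        combined[i] = std::max(depthEdges[i], normalEdges[i]);
    return combined;
}
```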

Rossignac, J. and van Emmerik, M. Hidden contours on a frame-buffer. Proc. of the 7th Eurographics Workshop on Computer Graphics Hardware, 1992.
Creates visible silhouette edges with constant thickness, at the same depth value as the corresponding polygon edge. Works well when the dihedral angle between adjacent front- and back-facing polygons is not large. As the line width increases, gaps may occur between silhouette edges.

Rossignac, J. and van Emmerik, M. Hidden contours on a frame-buffer. Proc. of the 7th Eurographics Workshop on Computer Graphics Hardware, 1992.
1. Fill the background with white.
2. Enable back-face culling; set the depth function to "Less Than".
3. Render front-facing polygons in white.
4. Enable front-face culling; set the depth function to "Less Than or Equal To".
5. In black, draw back-facing polygons in wire-frame mode.
6. Repeat for a new viewpoint.
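A minimal fixed-function OpenGL sketch of these steps, assuming a current context, a hypothetical drawScene() callback that issues the scene's polygons, and view matrices already set for the desired viewpoint:

```cpp
// Sketch of the Rossignac / van Emmerik frame-buffer silhouette algorithm.
// drawScene() is a hypothetical callback that renders the scene's polygons.
#include <GL/gl.h>

void renderHiddenContours(void (*drawScene)(), float lineWidth)
{
    // 1. Fill the background with white.
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);

    // 2-3. Back-face culling, depth test "Less Than", front faces in white.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glDepthFunc(GL_LESS);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glColor3f(1.0f, 1.0f, 1.0f);
    drawScene();

    // 4-5. Front-face culling, depth test "Less Than or Equal To",
    //      back faces in black wire-frame; only lines at the silhouette
    //      survive the depth test against the white front faces.
    glCullFace(GL_FRONT);
    glDepthFunc(GL_LEQUAL);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glLineWidth(lineWidth);
    glColor3f(0.0f, 0.0f, 0.0f);
    drawScene();

    // 6. Repeat for each new viewpoint (the caller updates the view matrices).
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  // restore default state
}
```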