Computer Graphics One of the central components of three-dimensional graphics has been a basic system that renders objects represented by a set of polygons.

Computer Graphics One of the central components of three-dimensional graphics has been a basic system that renders objects represented by a set of polygons. One approach to rendering three-dimensional objects is to build a basic renderer and then add enhancements. The basic renderer may be one that incorporates a local reflection model, such as the Phong model, into a Phong incremental shader.

Computer Graphics Advantages gained by this approach include: Modeling objects using polygons is straightforward. Piecewise linearities are rendered invisible by the shading technique. Geometric information is stored only at the polygonal vertices; the information required by the reflection model that evaluates a shade at each pixel is interpolated from it. This allows fast, hardware-based shading.

Computer Graphics More accurate representations made up of a set of bicubic patches can be converted to a polygon representation and fed to such a renderer. One drawback of using polygons to model objects is that a large number of polygons are required to achieve much detail for complex objects.

Computer Graphics The main steps in rendering a polygonal object are: 1. Polygons representing an object are extracted from the database and transformed into the world coordinate system using linear transformations such as translation and scaling. 2. A scene constructed in this way is transformed into a coordinate system based on a view point or view direction. 3. The polygons are then subjected to a visibility test, called 'backface elimination' or 'culling', which removes those polygons that face away from the viewer.

Computer Graphics The main steps in rendering a polygonal object are: 4. Unculled polygons are clipped against a three-dimensional view volume. 5. Clipped polygons are then projected onto a view plane or image plane. 6. Projected polygons are then shaded by an incremental shading algorithm. First, the polygon is rasterized: the pixels that lie between the polygon's edges are determined. Second, a depth is evaluated for each pixel and a hidden surface calculation is performed. Third, the polygon is shaded.
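
The six steps above can be sketched as a chain of stages, each taking and returning a polygon list. This is only an illustrative skeleton: the stage names and the no-op bodies below are hypothetical stand-ins, not an implementation of the real operations.

```python
# A minimal sketch of the rendering pipeline described above. Each stage
# is modeled as a function from a polygon list to a polygon list; the
# identity function stands in for each real operation.
def run_pipeline(polygons, stages):
    for name, stage in stages:
        polygons = stage(polygons)
    return polygons

identity = lambda polys: polys  # placeholder for a real stage

stages = [
    ("local-to-world transform", identity),
    ("world-to-view transform", identity),
    ("backface culling", identity),
    ("clip to view volume", identity),
    ("project to view plane", identity),
    ("rasterize, depth-test and shade", identity),
]
```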

Polygonal representation of three-dimensional objects Objects that possess curved surfaces have these surfaces approximated by polygonal facets. The error introduced by this representation may be visually diminished by using interpolative shading algorithms.

Polygonal Representation of Curved Objects

Polygonal representation of three-dimensional objects For complex objects, a number of polygons in excess of 100,000 is not uncommon. Another problem occurs when objects are scaled up: an object adequately represented at one size may degrade when the object is enlarged. This has been called 'geometric aliasing'.

Polygonal representation of three-dimensional objects Polygonal representations can be generated manually from a designer's abstraction, or automatically from real objects by using devices such as laser rangers in conjunction with the appropriate software. The complete information necessary to shade a polygon is usually stored in a hierarchical data structure: objects, surfaces, vertices and normals.

Coordinate systems and rendering One view of the geometric part of the rendering process is that it consists of a series of coordinate transformations that takes an object database through a series of coordinate systems. For ease of modeling and application of local transformations, it makes sense to store the vertices of an object with respect to some point conveniently located in or near the object. This is called the local coordinate system.

Coordinate systems and rendering Once an object has been modeled, the next stage is to place it in the scene that we wish to render. The global coordinate system of the scene is known as the world coordinate system. All the objects have to be placed into this common space in order to have their relative spatial relationships defined. The act of placing an object in a scene defines the transformation required to take the object from local space to global space. If the object is being animated, the animation provides a time-varying transformation that takes the object into world space on a frame-by-frame basis.

Coordinate systems and rendering The scene is lit in world space, where the light sources are specified. The eye or camera coordinate system is a space used to establish the viewing parameters and view volume. A virtual camera can be positioned anywhere in world space and can point in any direction. The scene is projected onto a view plane.

Backface elimination and culling This operation removes entire polygons that face away from the viewer. When dealing with a single convex object, culling completely solves the hidden surface problem. If an object contains a concavity, or if there are multiple objects in a scene, a general hidden surface removal algorithm is needed as well as culling. We can determine whether a polygon is visible from a view point by a simple geometric test: the geometric normal to the polygon is calculated, and the angle between it and the line-of-sight vector (the vector from the polygon to the view point) is determined. If this angle is greater than 90 degrees, the polygon faces away from the viewer and is invisible.
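
The geometric test above can be sketched with a dot product: the angle between the normal and the line-of-sight vector exceeds 90 degrees exactly when their dot product is negative. The function names here are illustrative, not from the original text.

```python
# A minimal sketch of the backface test: a polygon is culled when the
# dot product of its normal and the line-of-sight vector is negative,
# i.e. when the angle between them is greater than 90 degrees.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backfacing(normal, polygon_point, view_point):
    # Line-of-sight vector: from a point on the polygon to the view point.
    line_of_sight = tuple(v - p for v, p in zip(view_point, polygon_point))
    return dot(normal, line_of_sight) < 0.0
```

For example, a polygon at the origin whose normal points along +z is visible from a view point on the +z axis, and culled when viewed from the -z axis.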

Screen space The fundamental transformation that takes us into screen space is the perspective transformation, which takes a point in the scene and projects it onto a view plane positioned at distance D from the view point and oriented normal to the viewing direction.

Perspective Transformation

Screen Space Screen space is defined to act within a closed volume called the viewing frustum, which delineates the volume of space that is to be rendered. Objects that lie outside the viewing frustum are not rendered.

View Frustum

Homogeneous Coordinates A point in three-dimensional space requires three coordinates - x, y and z - to describe its location. In homogeneous coordinates, an extra coordinate (which can be thought of as a scaling term) is added. By using homogeneous coordinates, we are able to represent transformations as matrix multiplications. Then, in order to compute the transformation of some object, we need only perform matrix multiplications on the vertices of the polygons making up the object's representation.

Homogeneous Coordinates The transformations we are most interested in are scaling, translation and rotation. These can be defined as follows:

Scaling Transformation

Translation Transformation

Rotation About the X-Axis

Homogeneous Coordinates Rotations about the y and z axes are similar. A series of operations can be combined by performing a series of matrix multiplications. If the resulting homogeneous coordinate w is not equal to one, the x, y and z coordinates must be divided by w at the end of the operation.
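
The transformations above can be sketched in code. This is a minimal illustration, assuming points are row vectors [x, y, z, w] multiplied on the right by 4x4 matrices, with a final divide by w; the function names are hypothetical.

```python
import math

# Row-vector times 4x4 matrix: result[j] = sum_i v[i] * m[i][j]
def mat_vec(v, m):
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def translation(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def rotation_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def homogeneous_divide(v):
    # Divide through by the homogeneous coordinate w when it is not 1.
    return [c / v[3] for c in v[:3]]
```

Composite transformations are built by multiplying the matrices together once, then applying the product to every vertex.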

Perspective Transformation The near plane is the view plane at distance D; the far plane is at distance F. If the height of the view plane is 2h, then the perspective transformation of a row vector [x y z 1] is given by the following matrix:
1  0  0            0
0  1  0            0
0  0  hF/(D(F-D))  h/D
0  0  -hF/(F-D)    0
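
A minimal sketch applying this matrix to a point [x, y, z, 1], written out directly rather than as a full 4x4 multiply. After the homogeneous divide, x maps to xD/(hz), and z maps to 0 on the near plane (z = D) and 1 on the far plane (z = F).

```python
# Perspective transformation of a point, using the matrix entries above:
# a = hF/(D(F-D)), b = -hF/(F-D), and the fourth (w) column entry h/D.
def perspective(point, D, F, h):
    x, y, z = point
    a = h * F / (D * (F - D))
    b = -h * F / (F - D)
    w = z * h / D
    return (x / w, y / w, (a * z + b) / w)
```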

Perspective Transformation This transformation can simply be concatenated with all of the other transformations in the rendering pipeline. Clipping is performed against the viewing frustum: objects lying completely outside the frustum are discarded; objects inside the frustum are transformed into screen space and then rendered; objects intersecting the frustum are clipped and then transformed into screen space.

Pixel-Level Processes Once the objects in a scene have been transformed into screen space, the processing is oriented towards pixels rather than polygons. Rasterization, hidden-surface removal and shading are all pixel-level processes.

Rasterization Rasterization is the process of finding out which pixels a polygon projects onto in screen space. For each scan line within a polygon, the x values of the edge pixels, xstart and xend, are calculated by linear interpolation. This yields a span of pixels between the two edges that has to be rendered. Care must be taken in these calculations; otherwise we will have aliasing problems (e.g. a straight line may appear as a series of steps). Most of these problems are caused by rounding errors in the interpolation of polygon edges.
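
The scan-line interpolation above can be sketched as follows. This is an illustrative sketch (requires Python 3.8+ for the walrus operator), ignoring fill conventions and rounding rules that a production rasterizer must handle.

```python
# For a scan line y, interpolate x along edge pq (None if y is outside
# the edge's vertical extent or the edge is horizontal).
def edge_x_at(y, p, q):
    (x0, y0), (x1, y1) = p, q
    if y0 == y1 or not (min(y0, y1) <= y < max(y0, y1)):
        return None
    t = (y - y0) / (y1 - y0)
    return x0 + t * (x1 - x0)

# For each scan line crossing the polygon, collect the edge crossings
# and keep the span (xstart, xend) between them.
def polygon_spans(vertices):
    ys = [v[1] for v in vertices]
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    spans = {}
    for y in range(int(min(ys)), int(max(ys))):
        xs = [x for p, q in edges if (x := edge_x_at(y, p, q)) is not None]
        if len(xs) >= 2:
            spans[y] = (min(xs), max(xs))
    return spans
```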

Hidden Surface Removal We now know which pixels contain which objects; however, since some pixels may contain two or more objects, we must calculate which of those objects is visible and which are hidden. Hidden surface removal is generally accomplished using the Z-buffer algorithm.

Hidden Surface Removal In this algorithm, we set aside a two-dimensional array of memory (the Z-buffer) of the same size as the screen (#rows x #columns). This is in addition to the buffer used to store the color values of the pixels that will be displayed. The Z-buffer holds depth (z) values. It is initialized so that each element has the value of the far clipping plane (the largest possible z-value after clipping has been performed). The color buffer is initialized so that each element contains the background color.

Hidden Surface Removal Now, for each polygon, we have the set of pixels that the polygon covers. For each of these pixels, we compare its interpolated depth (z-value) with the value already stored in the corresponding element of the Z-buffer. If the new value is less than the stored value, the pixel is nearer the viewer than any previously encountered pixel: replace the old Z-buffer value with the new, interpolated depth, and replace the old value in the color buffer with the color of the pixel. Repeat for all polygons in the image.
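
The depth comparison above can be sketched directly. This is a minimal illustration assuming each polygon has already been rasterized into (x, y, depth, color) samples; the buffer sizes and names are hypothetical.

```python
WIDTH, HEIGHT = 4, 4
FAR = 1.0  # depth of the far clipping plane after clipping

# Z-buffer initialized to the far plane; color buffer to the background.
zbuf = [[FAR] * WIDTH for _ in range(HEIGHT)]
frame = [["background"] * WIDTH for _ in range(HEIGHT)]

def plot(samples):
    for x, y, depth, color in samples:
        if depth < zbuf[y][x]:      # nearer than anything drawn so far
            zbuf[y][x] = depth
            frame[y][x] = color

plot([(1, 1, 0.8, "red")])
plot([(1, 1, 0.3, "blue")])   # nearer than red: overwrites it
plot([(1, 1, 0.5, "green")])  # farther than blue: discarded
```

Note that the polygons can be drawn in any order; the depth test alone decides visibility.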

Interpolative or Incremental Shading

Interpolative or Incremental Shading A major reason for the growth in popularity of polygon-based renderers is the existence of two interpolative or incremental shading algorithms, usually known as Gouraud shading and Phong shading. Gouraud shading is faster than Phong shading but cannot produce accurate highlights; it is usually used in applications where speed is important.

Interpolative or Incremental Shading Phong shading gives higher-quality images but is more expensive. Gouraud shading calculates intensities at polygon vertices only, using a local reflection model, and interpolates these for the pixels within the polygon. Phong shading interpolates vertex normals and applies a local reflection model at each pixel.

Interpolative or Incremental Shading The motivation for both schemes is efficiency and the 'convincing' rendering of polygonal objects. This means that, as well as giving an impression of solidity or three-dimensionality to the model, the shared edges between adjacent polygons that approximate a curved surface are made invisible.

Gouraud Shading Generally, the intensity of light at a polygon vertex is described as a function of the orientation of the vertex normal with respect to both the light source and the eye. This function is called a reflection model. Gouraud shading applies a local reflection model at each vertex of a polygon to calculate a set of vertex intensities. These intensities are then linearly interpolated across the polygon interior on a scan-line basis.
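
The per-span interpolation above can be sketched as follows: given the intensities at the two ends of a scan-line span (themselves interpolated from vertex intensities produced by the reflection model), the intensities between them are obtained by linear interpolation. The function name is illustrative.

```python
# Linearly interpolate an intensity across a scan-line span of n pixels,
# from the intensity at xstart to the intensity at xend.
def gouraud_span(i_start, i_end, n_pixels):
    if n_pixels == 1:
        return [i_start]
    step = (i_end - i_start) / (n_pixels - 1)
    return [i_start + k * step for k in range(n_pixels)]
```

Because only additions are needed per pixel once the step is known, this is cheap enough for hardware implementation.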

Phong Shading In Phong shading, the vertex normals (calculated exactly as in Gouraud shading) are linearly interpolated across the polygon. The local reflection model is then applied at each pixel using the interpolated normal. This solves the highlight problem and reduces Mach banding, but is more computationally expensive.
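
To contrast with Gouraud shading, the sketch below interpolates the *normal* across a span, re-normalizes it, and evaluates a reflection model at every pixel. For brevity the reflection model here is a purely diffuse (Lambertian) term, an assumption of this example; a full Phong reflection model would add ambient and specular terms.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Interpolate the normal from n_start to n_end across a span of n_pixels,
# applying a diffuse reflection model (clamped dot product) at each pixel.
def phong_span(n_start, n_end, light_dir, n_pixels):
    light = normalize(light_dir)
    out = []
    for k in range(n_pixels):
        t = k / (n_pixels - 1) if n_pixels > 1 else 0.0
        n = normalize(tuple(a + t * (b - a) for a, b in zip(n_start, n_end)))
        out.append(max(0.0, sum(a * b for a, b in zip(n, light))))
    return out
```

Because the model is evaluated per pixel rather than per vertex, a highlight that falls inside a polygon is rendered correctly instead of being missed or smeared, at the cost of a normalization and model evaluation at every pixel.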