Components of illumination
– that from diffuse illumination (incident rays come from all over, not just one direction) - E_d
– that from a point source which is scattered diffusely from the surface - E_sd
– that from a point source which is specularly reflected - E_ss
E = E_d + E_sd + E_ss


Diffuse illumination A proportion of the light reaching a surface is reflected back to the observer. This proportion depends on the angle of the incident light and the properties of the surface, but not on the location of the viewer.

Diffuse illumination I_d - incident illumination; E_d - observed intensity. E_d = R.I_d, where R is the reflection coefficient of the surface (0 <= R <= 1): the proportion of the light that is reflected back out.

Diffuse scattering from a point source When a light ray strikes a surface it is scattered diffusely (i.e. in all directions), so the observed intensity doesn't change with the angle the observer is looking from.

Diffuse scattering from a point source The intensity of the reflected rays is: E_sd = R.cos(i).I_s
– i - angle of incidence: the angle between the surface normal and the ray of light (0 <= i <= 90)
– E_sd - intensity of the scattered diffuse rays
– I_s - intensity of the incident light ray
– R - reflection coefficient of the surface (0 <= R <= 1)
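As a concrete check of this formula, here is a minimal Python sketch (the function name and the clamping of back-facing light to zero are additions for illustration, not part of the slides):

```python
import math

def diffuse_scatter(R, incidence_deg, I_s):
    """E_sd = R * cos(i) * I_s: diffusely scattered intensity from a
    point source. Light arriving at more than 90 degrees (from behind
    the surface) contributes nothing, so cos(i) is clamped at zero."""
    cos_i = math.cos(math.radians(incidence_deg))
    return R * max(cos_i, 0.0) * I_s
```

Light striking head-on (i = 0) reflects the full R fraction of I_s; at i = 60 degrees only half of it, since cos(60) = 0.5.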

Specular reflection The relationship between a ray from a point source and the reflected ray is given by the law of reflection: i = r, where i is the angle of incidence and r the angle of reflection.

Specular reflection For a perfect reflector, all the incident light would be reflected back out in the direction of S. In fact, when the light strikes a surface some degree of diffusion takes place - the surface isn't a perfect reflector - so for an observer viewing at an angle s to the reflection direction S, some fraction of the reflected light is still visible. How much?

Specular reflection The proportion of light visible is a function of:
– the angle s (in fact it is proportional to cos(s))
– the quality of the surface
– the angle of incidence i.
We can define a coefficient w(i) - the specular reflection coefficient - which is a function of the material of which the surface is made and of i. Each surface has its own w(i).

Specular reflection coefficient

Specular reflection E_ss = w(i).cos^n(s).I_s

Specular reflection E_ss is the intensity of the light ray in the direction of the observer O. n is a fudge factor: n = 1 for a rough surface (paper), n = 10 for a smooth surface (glass). w(i) is usually never calculated - simply choose a constant (0.5, say). It is actually derived from the physical properties of the material from which the surface is made.

Specular reflection cos^n(s) is in fact a fudge which has no basis in physics, but it works to produce reasonable results. By raising cos(s) to the power n, we control how much the reflected ray spreads out as it leaves the surface.

Combining the illumination Combining all three components (diffuse illumination, diffuse reflection from a point source and specular reflection from a point source) gives us the following expression: E = E_d + E_sd + E_ss

Combining the illumination E = R.I_d + (R.cos(i) + w(i).cos^n(s)).I_s. E is the total intensity of light seen reflected from the surface by the observer.
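Putting the three terms together (E_d = R.I_d, E_sd = R.cos(i).I_s, E_ss = w(i).cos^n(s).I_s), a small Python sketch of the full expression might look like this. The function name is illustrative only, and w is taken as a constant rather than a function of i, as the slides themselves suggest:

```python
import math

def shade(R, I_d, I_s, i_deg, s_deg, w, n):
    """Total intensity seen by the observer:
    E = R*I_d + (R*cos(i) + w*cos^n(s)) * I_s
    R: reflection coefficient, I_d: diffuse illumination,
    I_s: point-source intensity, i: angle of incidence,
    s: angle between viewing direction and reflection direction,
    w: specular coefficient (a constant here), n: smoothness exponent."""
    cos_i = max(math.cos(math.radians(i_deg)), 0.0)  # clamp back-facing light
    cos_s = max(math.cos(math.radians(s_deg)), 0.0)
    return R * I_d + (R * cos_i + w * cos_s ** n) * I_s
```

Turning the point source off (I_s = 0) leaves only the diffuse-illumination term R.I_d, as expected.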

Calculating E
– E - we're trying to calculate E, so obviously that is unknown.
– R - defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface. Known.
– I_d - the incident diffuse light; we can define this to be anything we like: 0 = darkness, and for an 8-bit greyscale 255 = white. Known.
– cos(i) - we can work this out from L.N.
– w(i) - we can define this to be anything between 0 and 1 - trial and error called for! Known.

Calculating E
– n - defined for each surface, so we need to add it as a variable to our surface class and define it when creating the surface. Known.
– I_s - the intensity of the incident point light source; again we can define this to be anything we like (0 = darkness, 255 = white for an 8-bit greyscale). See below for a discussion of adding lights to our data model. Known.
– cos(s)? Ah!

Calculating cos(s) cos(s) = S.O. We know O; we need S.

Calculating cos(s) –Thanks to the law of reflection we know that S is the mirror of the incident ray about the surface normal. It can be found from some vector maths: S = 2Q - L, where Q = (L.N)N is the projection of L onto the unit normal N.
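In vector form, with Q = (L.N)N, the mirror direction works out as S = 2(L.N)N - L. A short Python sketch (the function name is illustrative; L and N are assumed to be unit vectors):

```python
def reflect(L, N):
    """Mirror the unit vector L about the unit normal N: S = 2(L.N)N - L."""
    d = sum(l * n for l, n in zip(L, N))           # the dot product L.N
    return tuple(2 * d * n - l for l, n in zip(L, N))
```

A ray along the normal reflects straight back along the normal; a grazing ray (L perpendicular to N) comes back reversed.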

Lambert Shading Finally, we know all of the terms in the combined illumination equation, and for any surface in our model we can calculate the appropriate shade: E = R.I_d + (R.cos(i) + w(i).cos^n(s)).I_s. A program which implements this model of shading is said to implement Lambert Shading.

Extending to colour Each surface has a colour, which means it reflects different colours by different amounts, i.e. it has a different value of R for red, green and blue:
E_red = E_d_red + E_sd_red + E_ss_red
E_green = E_d_green + E_sd_green + E_ss_green
E_blue = E_d_blue + E_sd_blue + E_ss_blue

Gouraud Shading and Phong Shading Adopted from U Strathclyde’s graphics course

Problems with Lambert –Using a different colour for each polygon means that the polygons show up very clearly (the appearance of the model is said to be facetted).

Mach bands

–This is a physiological effect whereby the contrast between two areas of different shade is enhanced near the border between them.

Smooth Shading What is required is some means of smoothing the sudden transition in colour. Various algorithms exist for this; amongst the best known are: –Gouraud shading and –Phong shading both named after their inventors.

Gouraud Shading –The facetted appearance of a Lambert shaded model is due to each polygon having only a single colour. –To avoid this effect, it is necessary to vary the colour across a polygon:

Gouraud Shading Colour must be calculated for each pixel of the polygon. The method we use to calculate the colour results in the neighbouring pixels across the border between two polygons ending up with approximately the same colours. This blends the shades of the two polygons and avoids the sudden discontinuity at the border.

Gouraud shading –based upon calculating a vertex normal –an artificial construct (a true normal cannot exist for a point such as a vertex) –can be thought of as the average of the normals of all the polygons that share that vertex
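The averaging can be sketched as follows (a hypothetical helper; it assumes the face normals are unit vectors and rescales the sum back to unit length):

```python
def vertex_normal(face_normals):
    """Average the unit normals of all faces sharing a vertex,
    then renormalise the result to unit length."""
    sx = sum(n[0] for n in face_normals)
    sy = sum(n[1] for n in face_normals)
    sz = sum(n[2] for n in face_normals)
    length = (sx * sx + sy * sy + sz * sz) ** 0.5
    return (sx / length, sy / length, sz / length)
```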

Gouraud shading

–Having found the vertex normals for each vertex of the polygon we want to shade, we can calculate the colour at each vertex using the same formula that we did for Lambert Shading

Calculating the colour of each pixel Interpolating "scan-line algorithm". The light intensity at P is given by linear interpolation: where the scan line cuts the polygon edges at points A and B (whose intensities I_A and I_B are themselves interpolated from the vertex intensities), I_P = I_A + (I_B - I_A).(x_P - x_A)/(x_B - x_A).
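The scan-line step can be sketched like this (names are hypothetical; xa and xb are where the scan line crosses the two polygon edges, Ia and Ib the intensities already interpolated along those edges):

```python
def scanline_intensity(xa, Ia, xb, Ib, xp):
    """Gouraud scan-line interpolation: intensity at pixel x = xp,
    linearly blended between the edge crossings (xa, Ia) and (xb, Ib)."""
    t = (xp - xa) / (xb - xa)   # fraction of the way across the span
    return Ia + (Ib - Ia) * t
```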

Phong Shading –Phong shading is based on interpolating the surface normal vector – The arrows (and thus the interpolated vectors) give an indication of the curvature of the smooth surface which the flat polygon is approximating to.

Phong Shading The interpolation is (like Gouraud shading) based upon calculating the vertex normals (red arrows)… using these as the basis for interpolation along the polygon edges (blue arrows)… and then using these as the basis for interpolating along a scan line to produce the internal normals (green vectors). A colour value is then calculated for each pixel based on the interpolated value of the normal vector.
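The key difference from Gouraud is what gets interpolated. A sketch of the per-pixel normal (hypothetical helper; Na and Nb are the unit normals at the two ends of the span, and the blended vector must be renormalised because it comes out shorter than unit length):

```python
def interp_normal(Na, Nb, t):
    """Phong interpolation: blend two unit normals at parameter t,
    then renormalise so the result is a valid direction vector."""
    v = [a + (b - a) * t for a, b in zip(Na, Nb)]
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)
```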

Gouraud vs Phong Phong shading requires more calculations, but produces better results for specular reflection than Gouraud shading, in the form of more realistic highlights.

Why? Consider the specular reflection term cos^n(s). If n is large (the surface is a good smooth reflector) and one vertex has a very small value of s (it is reflecting the light ray in the direction of the observer) whilst the rest of the vertices have large values of s, a highlight occurs somewhere on our polygon.

Why? With Gouraud shading, nowhere on the polygon can have a brighter colour (i.e. a higher value) than a vertex: unless the highlight occurs on or near a vertex, it will be missed out altogether, and when it is near a vertex its effect is spread over the whole polygon. With Phong shading, however, an internal point may indeed have a higher value than a vertex, and the highlight will occur tightly focused in the (approximately) correct position.

Summary Lambert shading leads to a facetted appearance To get round this, use a smooth shading algorithm Gouraud and Phong shading produce good effects but at the cost of more calculations. Gouraud interpolates the calculated vertex colours Phong interpolates the calculated vertex normals Phong – slower but better highlights

Finer Details Adding fine details –Why we don’t use the brute force approach –Faking it with pictures

Fine details We could explicitly model this block of wood with lots of small, differently coloured surfaces - an immense amount of work.

Fine details In fact, we don't. We "stick" a picture (.bmp, .gif, etc.) onto a surface. This is "texture mapping"; the picture is called a "texture map".

Texture Mapping Need to shade image on a pixel by pixel basis

Texture Mapping From the shading model we know: E = R.I_d + (R.cos(i) + w(i).cos^n(s)).I_s. With texture mapping, we have a different R for each pixel: the texture map "modulates" R.

Texture Mapping Sticking the picture on. Map projection coords to image coords

Texture mapping – scan line Need to look up the colour value (R) at point P. But where is P? Consider ratios: AS1 : AC = as1 : ac; BS2 : BC = bs2 : bc; S1P : S1S2 = s1p : s1s2

Texture mapping – scan line
S1x = Ax + (Cx - Ax).(s1x - ax)/(cx - ax)
S1y = Ay + (Cy - Ay).(s1y - ay)/(cy - ay)
and
S2x = Bx + (Cx - Bx).(s2x - bx)/(cx - bx)
S2y = By + (Cy - By).(s2y - by)/(cy - by)
Thus:
Px = S1x + (S2x - S1x).(px - s1x)/(s2x - s1x)
Py = S1y + (S2y - S1y).(py - s1y)/(s2y - s1y)
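Each of these is the same one-dimensional ratio calculation: a point some fraction of the way along one edge maps to the same fraction of the way along the corresponding edge. A sketch in Python (scalar, per-axis version; the name is hypothetical):

```python
def map_ratio(a, c, A, C, p):
    """Point p lies on the screen edge a->c; return the corresponding
    point on the texture edge A->C, preserving the ratio ap:ac = AP:AC."""
    t = (p - a) / (c - a)    # fraction of the way along the screen edge
    return A + (C - A) * t   # same fraction along the texture edge
```

Applying it per coordinate gives S1 and S2 from the polygon edges, and then P along the scan line.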

Other maps and modulations There are other parameters in our shading equation which we can modulate:

Bump Mapping Simulate "bumps" or other small texture irregularities by artificially altering the surface normal vector; this alters the direction of the S vector and hence the amount of light reaching the observer.

Bump Mapping

Diffusion Mapping

The end of the line

Summary
– the need to model fine details to achieve realism
– explicitly modelling fine details is difficult and time-consuming
– fine detail can be simulated using texture mapping and bump mapping
– texture mapping works by modulating the surface colour term in the shading equation
– bump mapping works by modulating the surface normal vector in the shading equation

Hidden Line Removal Adopted from U Strathclyde’s course page

Introduction Depth cueing Surfaces Vectors/normals Hidden face culling Convex/concave solids

Perspective Confusion

Hidden Line Removal (show demo – basic3d wireframe surface)

Hidden Line Removal There is no one best algorithm. We look at a simple approach for convex solids, based upon working out which way a surface is pointing relative to the viewer. convex concave

Based on surfaces, not lines Need a Surface data structure (cf. WireframeSurface, which was made up of lines)

Flat surfaces The key requirement of our surfaces is that they are FLAT. The easiest way to ensure this is by using only three points to define the surface… –(any triangle MUST be flat - think about it) …but as long as you promise not to do anything that will bend a flat surface, we can allow them to be defined by as many points as you like. Surfaces are single-sided.

Which way does a surface point? –Vector mathematics defines the concept of a surface’s normal vector. –A surface’s normal vector is simply an arrow that is perpendicular to that surface (i.e. it sticks straight out)

Determining visibility Consider the six faces of a cube and their normal vectors. Vectors N 1 and N 2 are the normals to surfaces 1 and 2 respectively. Vector L points from surface 1 to the viewpoint. It can be seen that surface 1 is visible to the viewer whilst surface 2 cannot be seen from that position.

Determining visibility Mathematically, a surface is visible from the position given by L if: -90° < θ < 90°, where θ is the angle between L and N. Equivalently, cos θ > 0.

Determining visibility Fortunately we can calculate cos θ from the directions of L (lx, ly, lz) and N (nx, ny, nz). This is due to the well-known result in vector mathematics - the dot product (or scalar product) - whereby: L.N = lx.nx + ly.ny + lz.nz = |L||N| cos θ

Determining visibility Alternatively: cos θ = L.N, where L and N are unit vectors (i.e. of length 1)
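The whole visibility test then collapses to a sign check on the dot product. A minimal sketch (function names are illustrative):

```python
def dot(u, v):
    """Scalar product of two 3-D vectors."""
    return sum(a * b for a, b in zip(u, v))

def surface_visible(L, N):
    """Visible when the angle between the view vector L and the outward
    normal N is less than 90 degrees, i.e. cos(theta) > 0, i.e. L.N > 0."""
    return dot(L, N) > 0
```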

How do we work out L.N?

–At this point we know: we need to calculate cos θ, and we have values for lx, ly, lz. The only things we are missing are nx, ny, nz.

Calculating the normal vector –If you multiply any two vectors using the vector product, the result is another vector that is perpendicular to the plane (i.e normal) which contained the two original vectors.

IMPORTANT We need to adopt the convention that the calculated normal vector points away from the observer when the angle between the two initial vectors is measured in a clockwise direction. Failure to do this will lead to MAJOR confusion when you try to implement this. Believe me, I know.

Calculating the normal –Where to find two vectors that we can multiply? –Answer: we can manufacture them artificially from the points that define the plane we want the normal of

Calculating the normal By subtracting the coordinates of consecutive points we can form vectors which are guaranteed to lie in the plane of the surface under consideration.

Calculating the normal We define the vectors to be anti-clockwise when viewing the surface from the interior (imagine the surface is part of a cube and you're looking at it from INSIDE the cube). Following the anti-clockwise convention mentioned above, we have produced what is known as an outward normal.

IMPORTANT An important consequence of this is that when you define the points that define a surface in a program, you MUST add them in anti-clockwise order

Calculating the normal This is the definition of the vector product: N = V1 × V2 = (v1y.v2z - v1z.v2y, v1z.v2x - v1x.v2z, v1x.v2y - v1y.v2x)
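Taking the two edge vectors formed from consecutive vertices and applying the vector product gives the surface normal directly. A sketch (function name is illustrative; vertices are assumed listed anti-clockwise, as the slides require for an outward normal):

```python
def surface_normal(p0, p1, p2):
    """Normal of a flat surface from three consecutive vertices,
    via the vector product (p1 - p0) x (p2 - p1)."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p1[i] for i in range(3))
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)
```

For three corners of a unit square in the xy-plane, the result is perpendicular to that plane, as expected.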

Visibility –At this point we know: we need to calculate cos θ, and we have values for lx, ly, lz and values for nx, ny, nz.

Visibility If L.N > 0 then draw the surface, else don't! (or draw dashes)

Making it easier Actually, in certain cases we can simplify things a little. If the viewpoint lies somewhere on the negative z-axis, as it did when we first set up the projection transformations (i.e. without any viewpoint transformations), we can forget about L and cos θ. All we really need to know is whether the normal points into the screen or out of it, i.e. is nz positive or negative? In that case, all we need to do is calculate nz and test it.

An alternative approach Consider where the extension of the surface cuts the z-axis

More complex shapes Multiple objects Concave objects

More complex shapes In these cases, each surface must be considered individually. Two different types of approach are possible: –Object space algorithms - examine each face in space to determine its visibility –Image space algorithms - at each screen pixel position, determine which face element is visible. Roughly, the relative efficiency of an image space algorithm increases with the complexity of the scene being represented, but the drawing can often be simplified for convex objects by removing surfaces which are invisible even for a single object.

The painter's (depth-sort) algorithm –based upon sorting the surfaces by their z-coordinates. The algorithm can be summarised thus: –Sort the surfaces into order of increasing depth. Define the maximum z value of the surface and the z-extent. –Resolve any depth ambiguities –Draw all the surfaces, starting with the largest z-value
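The first step, the depth sort itself, can be sketched as follows (a hypothetical representation in which each surface is a list of (x, y, z) vertices and z increases with distance from the viewer; ambiguity resolution is omitted):

```python
def painters_order(surfaces):
    """Sort surfaces by decreasing maximum z, so the most distant
    surface is drawn first and nearer surfaces paint over it."""
    return sorted(surfaces, key=lambda s: max(v[2] for v in s), reverse=True)
```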

Ambiguities Ambiguities arise when the z-extents of two surfaces overlap.

Ambiguities – front view

Resolving Ambiguities –An algorithm exists for ambiguity resolution –Where two shapes P and Q have overlapping z-extents, perform the following 5 tests (in sequence of increasing complexity). –If any test fails, draw P first.

x - extents overlap?

y -extents overlap?

Is Q not completely on the side of P nearest the viewer?

Is P not completely on the side of Q further from the viewer?

Does the projection of the two surfaces overlap?

If all tests are passed … then reverse P and Q in the list of surfaces sorted by Z_max, and set a flag to say that the test has been performed once. The flag is necessary for the case of intersecting planes: if the tests are all passed a second time, then it is necessary to split the surfaces and repeat the algorithm on the 4 resulting surfaces.

End up drawing Q2,P1,P2,Q1

Summary The need for depth cues and hidden line removal. Using surfaces rather than lines. Calculating which way a surface is facing (surface normal). Basing a decision on visibility on the normal vector and the observer vector.

References es/52.359if/