Chapter VIII Image Texturing

Where are We? While the vertex shader outputs the clip-space vertices, the rasterizer outputs a set of fragments in screen space. The per-fragment attributes may include a normal vector, a set of texture coordinates, etc. Using these data, the fragment shader determines the final color of each fragment. The two most important tasks in the fragment shader are lighting and texturing. Before moving on to the fragment shader, let's see the basics of texturing.

Texture Coordinates The simplest among the various texturing methods is image texturing. A texture is typically represented as a 2D array of texels (texture elements). A texel’s location can be described by the coordinates of its center. Image texturing is often described as pasting an image on an object's surface.
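
The texel-center convention above can be made concrete with a tiny sketch. It assumes the common GL convention that the texel at array indices (i, j) has its center at half-integer coordinates; the function name is illustrative, not from the source.

```python
def texel_center(i, j):
    # The texel at array indices (i, j) has its center at (i + 0.5, j + 0.5)
    # in texel-space coordinates (the convention used by GL).
    return (i + 0.5, j + 0.5)

print(texel_center(0, 0))  # (0.5, 0.5)
print(texel_center(3, 1))  # (3.5, 1.5)
```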

Texture Coordinates (cont’d) For texturing, we assign the texture coordinates, (s,t), to each vertex of the polygon mesh “at the modeling step,” as will be presented soon. The rasterizer interpolates them for the fragments. The normalized texture coordinates (s,t) are projected into the texture space.

Texture Coordinates (cont’d) It is customary to normalize the texture coordinates such that s and t range from 0 to 1. The normalized texture coordinates do not depend on a specific texture resolution and can be freely plugged into various textures.
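
A minimal sketch of why normalization helps: the same normalized (s, t) can be scaled by any texture resolution. The function name to_texel_space is an illustrative assumption, not an API from the source.

```python
def to_texel_space(s, t, width, height):
    # Normalized (s, t) in [0, 1] are scaled by the texture resolution,
    # so the same coordinates address textures of any size.
    return (s * width, t * height)

# The same normalized coordinates work for two different resolutions.
print(to_texel_space(0.5, 0.25, 128, 128))  # (64.0, 32.0)
print(to_texel_space(0.5, 0.25, 256, 256))  # (128.0, 64.0)
```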

Texture Coordinates (cont’d) Multiple images of different resolutions can be glued to a surface without changing the texture coordinates.

Surface Parameterization The texture coordinates are assigned to the vertices of the polygon mesh. This process is called surface parameterization or simply parameterization. In general, parameterization requires unfolding a 3D surface onto a 2D planar domain.

Chart and Atlas The surface to be textured is often subdivided into a (hopefully small) number of patches. Each patch is unfolded and parameterized. Then, the artist draws an image for each patch. An image for a patch is often called a chart. Multiple charts are usually packed and arranged in a texture, which is often called an atlas.

Texturing in GL Assume that an image texture file is loaded into the GL program to fill texData, which is then used to create and fill a texture object. Observe that this process is similar to that of creating buffer objects for the vertex array.

Texturing in GL (cont’d)

Texture Wrapping Mode The texture coordinates (s,t) are not necessarily in the range of [0,1]. The texture wrapping mode handles (s,t) that are outside of the range. Clamp-to-Edge: The out-of-range coordinates are rendered with the edge colors. Repeat: The texture is tiled at every integer junction. Mirrored-Repeat: The texture is mirrored or reflected at every integer junction.
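
The three wrapping modes can be sketched on the CPU. This is a simplified model (real GL clamps in texel space rather than exactly to [0, 1], and the mode names are spelled informally here):

```python
import math

def wrap(s, mode):
    """Map a texture coordinate s into [0, 1] per the wrapping mode."""
    if mode == "clamp_to_edge":
        # Out-of-range coordinates stick to the edge.
        return min(max(s, 0.0), 1.0)
    if mode == "repeat":
        # The texture tiles at every integer junction: keep the fraction.
        return s - math.floor(s)
    if mode == "mirrored_repeat":
        # Period of 2: forward over [0,1], reflected over [1,2].
        t = math.fmod(abs(s), 2.0)
        return 2.0 - t if t > 1.0 else t
    raise ValueError(mode)

print(wrap(1.25, "clamp_to_edge"))    # 1.0
print(wrap(1.25, "repeat"))           # 0.25
print(wrap(1.25, "mirrored_repeat"))  # 0.75
```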

Texture Filtering – Magnification & Minification Consider a quad. For each fragment located at (x,y) on the screen, its texture coordinates (s,t) are projected onto (s',t') in the texture space. Note that (s',t') are floating-point values in almost all cases. Consequently, texels around (s',t') are sampled. This sampling process is called texture filtering. The screen-space quad may appear larger than the image texture, and therefore the texture is magnified so as to fit the quad: there are more pixels than texels.

Texture Filtering – Magnification & Minification (cont’d) In contrast, the screen-space quad may appear smaller than the image texture, and the texture is minified. Sparsely projected onto the texture space, the pixels take large jumps in the space. In summary, magnification means more pixels than texels, and minification means fewer pixels than texels.

Filtering for Magnification Option 1: Nearest point sampling A block of pixels can be mapped to a single texel. Consequently, adjacent pixel blocks can change abruptly from one texel to the next texel, and a blocky image is often produced.

Filtering for Magnification (cont’d) Option 2: Bilinear interpolation It is preferred to nearest point sampling not only because the final result suffers much less from the blocky image problem but also because the graphics hardware is usually optimized for bilinear interpolation.
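
Both magnification options can be sketched on the CPU. This is a simplified model under stated assumptions: texel centers lie at half-integer coordinates, out-of-range neighbors are clamped to the edge, and the 2x2 gray-scale texture and function names are illustrative.

```python
import math

def sample_nearest(tex, u, v):
    # (u, v) are in texel space; whichever texel contains the point wins.
    h, w = len(tex), len(tex[0])
    i = min(int(u), w - 1)
    j = min(int(v), h - 1)
    return tex[j][i]

def sample_bilinear(tex, u, v):
    # Texel centers sit at (i + 0.5, j + 0.5): shift by 0.5, then blend
    # the surrounding 2x2 texels by the fractional offsets.
    h, w = len(tex), len(tex[0])
    x, y = u - 0.5, v - 0.5
    i0, j0 = math.floor(x), math.floor(y)
    fx, fy = x - i0, y - j0
    def t(i, j):  # clamp-to-edge for neighbors outside the texture
        return tex[max(0, min(j, h - 1))][max(0, min(i, w - 1))]
    top = (1 - fx) * t(i0, j0)     + fx * t(i0 + 1, j0)
    bot = (1 - fx) * t(i0, j0 + 1) + fx * t(i0 + 1, j0 + 1)
    return (1 - fy) * top + fy * bot

tex = [[0.0, 1.0],
       [1.0, 0.0]]  # a 2x2 gray-scale "checker" texture
print(sample_nearest(tex, 1.2, 0.7))   # 1.0 (the point falls in texel (1,0))
print(sample_bilinear(tex, 1.0, 1.0))  # 0.5 (the center averages all four)
```

Nearest point sampling returns a constant color across each texel's whole footprint, which is what produces the blocky appearance; bilinear interpolation varies smoothly between texel centers.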

Filtering for Minification Consider a checker-board image texture. If all pixels are surrounded by dark-gray texels, the textured quad appears dark gray. If all pixels are surrounded by light-gray texels, the textured quad appears light gray.

Mipmap The aliasing problem observed in minification is caused by the fact that we have more texels than pixels. The pixels take large jumps in the texture space, leaving many texels not involved in texture filtering. Then, a simple solution is to reduce the number of texels such that the texel count becomes as close as possible to the pixel count. For decreasing the texture size, down-sampling is adopted. Given an original texture of 2^l×2^l resolution, a pyramid of (l+1) levels is constructed, where the original texture is located at level 0. The pyramid is called a mipmap. The level to filter is named the level of detail and is denoted by λ. We will look for the level whose texel count is close to the pixel count.

Mipmapping (cont’d) A pixel covers an area on the screen. For simplicity, take the area as square. Then, a pixel’s projection onto the texture space is not a point but an area centered at (s',t'). The projected area is called the footprint of the pixel. In this contrived example, the quad and the level-1 texture have the same size. A pixel’s footprint covers 2×2 texels in level 0, but covers a single texel in level 1. When a pixel footprint covers m×m texels of the level-0 texture, λ is set to log2 m.
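
The pyramid size and the footprint-to-λ rule can be sketched directly (a minimal illustration; the function names are not from the source):

```python
import math

def mipmap_levels(size):
    # A 2^l x 2^l texture yields a pyramid of l + 1 levels,
    # with the original texture at level 0 and a 1x1 texture at level l.
    return int(math.log2(size)) + 1

def lod(m):
    # When a pixel footprint covers m x m level-0 texels, lambda = log2(m).
    return math.log2(m)

print(mipmap_levels(256))  # 9 levels: 256, 128, ..., 1
print(lod(2))              # 1.0: a 2x2 footprint selects level 1
print(round(lod(3), 3))    # 1.585: between levels 1 and 2
```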

Mipmapping (cont’d) In the example, the screen-space quad and level-2 texture have the same size. A pixel's footprint covers exactly a single texel in level-2 texture, which is then filtered. In this and previous examples, observe that all texels are involved in filtering.

Mipmapping (cont’d) In the following example, λ = log2 3 = 1.585.., and so we consider levels 1 and 2, more formally ⌊λ⌋ and ⌈λ⌉.

Mipmapping (cont’d) Between ⌊λ⌋ and ⌈λ⌉, which one to choose? We can take the nearest level, ⌊λ+0.5⌋; at that level, we can take the nearest point or, in contrast, adopt bilinear interpolation. Alternatively, we can take both levels and linearly interpolate the filtering results; at each level, we can take the nearest point, whereas if bilinear interpolation is performed at each level, this is called trilinear interpolation.
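
The two level-selection strategies can be sketched as follows (an informal illustration; the rounding of the blend weight is only for display):

```python
import math

def nearest_level(lam):
    # Take the single nearest mipmap level: floor(lambda + 0.5).
    return math.floor(lam + 0.5)

def level_blend(lam):
    # For interpolation between levels, filter floor(lambda) and
    # ceil(lambda) separately, then blend them with weight frac(lambda).
    lo = math.floor(lam)
    return (lo, lo + 1, round(lam - lo, 3))

print(nearest_level(1.585))  # 2
print(level_blend(1.585))    # (1, 2, 0.585)
```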

Texture Filtering in GL GL provides void glGenerateMipmap(GLenum target), where target can be GL_TEXTURE_2D. Given a screen-space pixel, GL itself computes its footprint to determine whether the texture is magnified or minified. In either case, the filtering method has yet to be specified by invoking glTexParameteri(GLenum target, GLenum pname, GLint param). Here, target is either GL_TEXTURE_2D or GL_TEXTURE_CUBE_MAP, and pname can be GL_TEXTURE_MAG_FILTER, GL_TEXTURE_MIN_FILTER, GL_TEXTURE_WRAP_S, or GL_TEXTURE_WRAP_T. If pname is GL_TEXTURE_MAG_FILTER, param is either GL_NEAREST or GL_LINEAR (for bilinear interpolation).

Texture Filtering in GL (cont’d) In glTexParameteri(GLenum target, GLenum pname, GLint param), if pname is GL_TEXTURE_MIN_FILTER, param can be GL_NEAREST, GL_LINEAR, GL_NEAREST_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR, GL_LINEAR_MIPMAP_NEAREST, or GL_LINEAR_MIPMAP_LINEAR. With GL_NEAREST and GL_LINEAR, the mipmap is not used: the level-0 texture is filtered either by nearest point sampling or by bilinear interpolation. For the other four, the token after MIPMAP determines which level(s) to choose (NEAREST picks the nearest level; LINEAR interpolates between two levels), and the first token determines how to filter each level. GL_LINEAR_MIPMAP_LINEAR corresponds to trilinear interpolation.
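
The naming scheme can be summarized as a small table; a sketch, where the informal labels ("none", "nearest", "linear", "bilinear") are this summary's own shorthand, not GL terminology:

```python
# Each GL minification filter decomposed into
# (how the mipmap level is selected, how each level is filtered).
# "none" means the mipmap is not used: level 0 is filtered directly.
MIN_FILTERS = {
    "GL_NEAREST":                ("none",    "nearest"),
    "GL_LINEAR":                 ("none",    "bilinear"),
    "GL_NEAREST_MIPMAP_NEAREST": ("nearest", "nearest"),
    "GL_LINEAR_MIPMAP_NEAREST":  ("nearest", "bilinear"),
    "GL_NEAREST_MIPMAP_LINEAR":  ("linear",  "nearest"),
    "GL_LINEAR_MIPMAP_LINEAR":   ("linear",  "bilinear"),  # trilinear
}

print(MIN_FILTERS["GL_LINEAR_MIPMAP_LINEAR"])  # ('linear', 'bilinear')
```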

Vertex Shader and Rasterizer (revisited) Recall our first vertex shader. The per-vertex texture coordinates it outputs are interpolated for each fragment and then passed to the fragment shader.

Vertex and Fragment Shaders The inputs to the fragment shader are: the per-vertex output variables produced by the vertex shader, interpolated to determine the per-fragment ones; the uniforms to be used by the fragment shader; and the samplers, i.e., textures. The output is one or more fragment colors that are passed to the output merger.

Fragment Shader The fragment shader runs once for each fragment, and each fragment is processed independently of the others. The built-in function texture filters the sampler with the filtering method set up by glTexParameteri.

Texturing Examples Two results: texturing only, and texturing combined with lighting (Chapter IX).