GLSL Applications: 1 of 2. Joseph Kider. Source: Patrick Cozzi, Spring 2011. University of Pennsylvania, CIS 565, Fall 2011.

Agenda
- GLSL Applications
  - Per-Fragment Lighting
  - Image Processing
- Finish last week's slides
  - OpenGL Drawing
  - OpenGL Multithreading

Per-Fragment Lighting

Per-Fragment Lighting
(Images: per-vertex lighting vs. per-fragment lighting.)

Per-Fragment Lighting: Diffuse

Per-Fragment Lighting: Diffuse

uniform vec3 u_Color;

in vec3 fs_Incident;
in vec3 fs_Normal;
in vec3 fs_Texcoord;

out vec4 out_Color;

void main(void)
{
    vec3 incident = normalize(fs_Incident);
    vec3 normal = normalize(fs_Normal);
    float diffuse = max(0.0, dot(-incident, normal));
    out_Color = vec4(diffuse * u_Color, 1.0);
}

- in vectors are not normalized. Why? (They are interpolated across the triangle, and interpolation does not preserve unit length.) Good practice: don't write to in variables.
- Know the graph of the clamped cosine: why max? (Without it, surfaces facing away from the light would receive negative light.)
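
For context, a minimal vertex shader sketch that could produce fs_Incident and fs_Normal for the shader above. The uniform names (u_Model, u_ViewProjection, u_LightPosition) and the world-space lighting convention are assumptions, not from the original slides:

in vec3 Position;
in vec3 Normal;

uniform mat4 u_Model;           // assumed model transform
uniform mat4 u_ViewProjection;  // assumed view-projection transform
uniform vec3 u_LightPosition;   // assumed world-space light position

out vec3 fs_Incident;
out vec3 fs_Normal;

void main(void)
{
    vec4 worldPosition = u_Model * vec4(Position, 1.0);
    fs_Incident = worldPosition.xyz - u_LightPosition; // light -> surface
    fs_Normal = mat3(u_Model) * Normal; // valid if u_Model has no non-uniform scale
    gl_Position = u_ViewProjection * worldPosition;
}

Both outputs are interpolated across the triangle, which is exactly why the fragment shader must renormalize them.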

Per-Fragment Lighting: Specular

Per-Fragment Lighting: Specular

uniform vec3 u_Color;
uniform vec3 u_SpecColor;
uniform float u_SpecHardness;

in vec3 fs_Incident;
in vec3 fs_Viewer;
in vec3 fs_Normal;
in vec3 fs_Texcoord;

out vec4 out_Color;

void main(void)
{
    vec3 incident = normalize(fs_Incident);
    vec3 viewer = normalize(fs_Viewer);
    vec3 normal = normalize(fs_Normal);
    vec3 H = normalize(-incident + viewer); // the "half" vector
    float specular = pow(max(0.0, dot(H, normal)), u_SpecHardness);
    float diffuse = max(0.0, dot(-incident, normal));
    out_Color = vec4(diffuse * u_Color + specular * u_SpecColor, 1.0);
}

This is Blinn-Phong shading: H is the "half" vector, halfway between the direction to the light (-incident) and the direction to the viewer.
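
For comparison, classic Phong uses a reflection vector instead of the half vector. A hypothetical drop-in replacement for the two H/specular lines in the shader above, not from the original slides:

// Classic Phong: reflect the incident direction about the normal,
// then compare the reflection against the direction to the viewer.
vec3 R = reflect(incident, normal);
float specular = pow(max(0.0, dot(R, viewer)), u_SpecHardness);

The two models produce similar highlights; Blinn-Phong is cheaper when the half vector can be reused and behaves better at grazing angles.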

Image Processing
Our first look at GPGPU: General-Purpose computation on Graphics Processing Units.
- Input: an image
- Output: a processed image
- A kernel runs on each pixel

Image Processing Examples: Image Negative

Image Processing Examples: Edge Detection

Image Processing Examples: Toon Rendering

Image Processing Questions
- Is the GPU a good fit for image processing?
- Is image processing data-parallel?
- What about bus traffic?
- What type of shader should implement an image processing kernel?

Image Processing: GPU Setup
- Input: a texture, rendered with a viewport-aligned quad, a.k.a. a full-screen quad
- Output: the framebuffer (...for now)
- Kernel: a fragment shader

Image Processing: GPU Setup
1. Render a viewport-aligned quad
2. A fragment is invoked for each screen pixel
3. Each fragment shader can access any part of the image, stored as texels: a gather
4. Each fragment shader executes the kernel and writes its color to the framebuffer

Image Processing: GPU Setup
How do we model the viewport-aligned quad? Two triangles? One big triangle?
(Diagram: both layouts covering the screen.)

Image Processing: GPU Setup
- Which has more vertex shader overhead? Does it matter?
- Which is simpler to implement?
- Which has less fragment shader overhead?
A sketch of the one-triangle approach follows.
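
A common trick (not from the original slides) is to skip the vertex buffer entirely and generate one oversized clip-space triangle from gl_VertexID; clipping discards everything outside the viewport. A minimal GLSL vertex shader sketch, assuming an OpenGL 3+ context:

// Drawn with glDrawArrays(GL_TRIANGLES, 0, 3); no vertex attributes needed.
void main(void)
{
    // gl_VertexID = 0, 1, 2  ->  (-1,-1), (3,-1), (-1,3)
    vec2 position = vec2(
        (gl_VertexID == 1) ? 3.0 : -1.0,
        (gl_VertexID == 2) ? 3.0 : -1.0);
    gl_Position = vec4(position, 0.0, 1.0);
}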

Image Processing: GPU Setup
- Triangle edges are redundantly shaded
- Fragments are processed in 2x2 blocks. Why? (The hardware differences values across each 2x2 quad to compute screen-space derivatives, e.g. for mipmap level selection; see the sketch below.)
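
To make the derivative connection concrete, a small fragment shader sketch (not from the original slides) using the built-in derivative functions, which only work because fragments are shaded in 2x2 quads:

in vec2 fs_Texcoords;
out vec4 out_Color;

void main(void)
{
    // The hardware differences fs_Texcoords across the 2x2 quad:
    vec2 dx = dFdx(fs_Texcoords); // change per pixel, horizontally
    vec2 dy = dFdy(fs_Texcoords); // change per pixel, vertically
    // abs(dx) + abs(dy) is fwidth(); this is what drives mipmap level selection.
    out_Color = vec4((abs(dx) + abs(dy)) * 100.0, 0.0, 1.0); // visualize the magnitude
}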

Image Processing: GPU Setup
Triangle edges are redundantly shaded.
(Chart: FPS vs. number of vertices for fan, strip, and max-area triangle layouts.)

Image Processing: GPU Setup
- The viewport has the same width and height as the image to be processed
- The texture also has the same dimensions
- How does the fragment shader access texels?

Image Processing: GPU Setup
Store texture coordinates per vertex.

Vertex Shader:

in vec3 Position;
in vec2 Texcoords;
out vec2 fs_Texcoords;

void main(void)
{
    fs_Texcoords = Texcoords;
    gl_Position = vec4(Position, 1.0);
}

Fragment Shader:

uniform sampler2D u_Image;
in vec2 fs_Texcoords;
out vec4 out_Color;

void main(void)
{
    out_Color = texture(u_Image, fs_Texcoords);
}

Image Processing: GPU Setup
Store texture coordinates per vertex:
- What memory costs does this incur? Does it matter?
- What bandwidth costs does this incur?
- What non-obvious optimization does it allow?

Image Processing: GPU Setup
Compute the texture coordinate in the fragment shader.

Vertex Shader:

in vec3 Position;

void main(void)
{
    gl_Position = vec4(Position, 1.0);
}

Fragment Shader:

uniform sampler2D u_Image;
uniform vec2 u_inverseViewportDimensions;
out vec4 out_Color;

void main(void)
{
    vec2 txCoord = u_inverseViewportDimensions * gl_FragCoord.xy;
    out_Color = texture(u_Image, txCoord);
}

What is u_inverseViewportDimensions? It is (1/width, 1/height) of the viewport, which rescales gl_FragCoord.xy from pixel coordinates into [0, 1] texture coordinates.
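
On the host side, the uniform is just the reciprocal of the viewport size, set once per viewport change. A minimal C++ sketch; the variable names (program, width, height) are illustrative:

GLint loc = glGetUniformLocation(program, "u_inverseViewportDimensions");
glUseProgram(program);
glUniform2f(loc, 1.0f / static_cast<float>(width),
                 1.0f / static_cast<float>(height));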

Image Processing: GPU Setup
How do you access adjacent texels?

uniform sampler2D u_Image;
in vec2 fs_Texcoords;
out vec4 out_Color;

void main(void)
{
    vec4 c0 = texture(u_Image, fs_Texcoords);
    vec4 c1 = textureOffset(u_Image, fs_Texcoords, ivec2(-1,  0));
    vec4 c2 = textureOffset(u_Image, fs_Texcoords, ivec2( 1,  0));
    vec4 c3 = textureOffset(u_Image, fs_Texcoords, ivec2( 0, -1));
    vec4 c4 = textureOffset(u_Image, fs_Texcoords, ivec2( 0,  1));
    out_Color = (c0 + c1 + c2 + c3 + c4) * 0.2; // average of center and 4 neighbors
}

Image Processing: GPU Setup
textureOffset requires a constant-expression offset; e.g. ivec2(x, y) with variable x and y is not allowed. How else do you access adjacent texels?

Image Processing: GPU Setup
How else do you access adjacent texels?

uniform sampler2D u_Image;
uniform vec2 u_inverseViewportDimensions;
out vec4 out_Color;

void main(void)
{
    vec2 txCoord = u_inverseViewportDimensions * gl_FragCoord.xy;
    vec2 delta = 1.0 / vec2(textureSize(u_Image, 0)); // size of one texel at mip level 0
    vec4 c0 = texture(u_Image, txCoord);
    vec4 c1 = texture(u_Image, txCoord + (delta * vec2(-1.0,  0.0)));
    vec4 c2 = texture(u_Image, txCoord + (delta * vec2( 1.0,  0.0)));
    vec4 c3 = texture(u_Image, txCoord + (delta * vec2( 0.0, -1.0)));
    vec4 c4 = texture(u_Image, txCoord + (delta * vec2( 0.0,  1.0)));
    out_Color = (c0 + c1 + c2 + c3 + c4) * 0.2;
}

Image Processing: Kernel Examples
Image Negative

uniform sampler2D u_Image;
in vec2 fs_Texcoords;
out vec4 out_Color;

void main(void)
{
    out_Color = vec4(1.0) - texture(u_Image, fs_Texcoords);
}

Image Processing: Kernel Examples: Gaussian Blur
(Figure: the 2D Gaussian function.)

Image Processing: Kernel Examples
Filter for a 3x3 Gaussian blur:

         [ 1 2 1 ]
  1/16 * [ 2 4 2 ]
         [ 1 2 1 ]

The elements sum to one. Other filters are also used, e.g. for:
- Edge detection
- Sharpen
- Emboss
- ...

Gaussian Blur
- How would you implement the fragment shader? (A sketch follows.)
- How is the memory coherence? 3x3, 5x5, etc.
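
One possible 3x3 Gaussian blur fragment shader; a sketch assuming the same u_Image sampler and fs_Texcoords input as the earlier slides, not a definitive implementation:

uniform sampler2D u_Image;
in vec2 fs_Texcoords;
out vec4 out_Color;

// 3x3 Gaussian blur: the 1-2-1 outer-product weights, normalized by 1/16.
void main(void)
{
    const float kernel[9] = float[](1.0, 2.0, 1.0,
                                    2.0, 4.0, 2.0,
                                    1.0, 2.0, 1.0);
    vec2 delta = 1.0 / vec2(textureSize(u_Image, 0)); // one texel
    vec4 sum = vec4(0.0);
    for (int y = -1; y <= 1; ++y)
    {
        for (int x = -1; x <= 1; ++x)
        {
            float w = kernel[(y + 1) * 3 + (x + 1)];
            sum += w * texture(u_Image, fs_Texcoords + delta * vec2(x, y));
        }
    }
    out_Color = sum / 16.0;
}

Since the Gaussian is separable, larger kernels (5x5 and up) are usually split into a horizontal pass and a vertical pass, which also improves memory coherence.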

Image Processing: Kernel Examples
What does this filter do?

        [ 1 1 1 ]
  1/9 * [ 1 1 1 ]
        [ 1 1 1 ]

(It averages the 3x3 neighborhood uniformly: a box blur.)

Image Processing: Read backs
How do we get the contents of the framebuffer into system memory? Print Screen?
Whichever API we are using, efficient read backs are important.

Image Processing: Read backs
glReadPixels

glUseProgram(/* ... */); // use the GPU for image processing
glDraw*(/* ... */);

// Allocate a buffer for the processed image
unsigned char* rgb = new unsigned char[width * height * 3];

// Ask for the framebuffer's color buffer
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb);

// ...
delete [] rgb;

Image Processing: Read backs
What is the major problem with glReadPixels? (It is synchronous: the call stalls the CPU until rendering finishes and the copy to system memory completes.)
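
One common mitigation, beyond the scope of these slides, is an asynchronous read back through a pixel buffer object (PBO), so the transfer can overlap other CPU work. A C++ sketch assuming an OpenGL 2.1+ context and the same width/height as above:

// Create a PBO sized for the RGB image
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 3, nullptr, GL_STREAM_READ);

// With a pack PBO bound, the data argument is a byte offset into the
// buffer, and the call returns without waiting for the transfer.
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

// ... do other CPU work while the transfer proceeds ...

// Map the buffer when the pixels are actually needed
const void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use pixels ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);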

Image Processing: Use Cases
- Photoshop-type applications
- Post-processing in games
- On-the-fly video manipulation
- Augmented reality
The last three don't even need read backs.