Mapping Computational Concepts to GPUs
Mark Harris, NVIDIA


2 Outline
- Data Parallelism and Stream Processing
- Computational Resources Inventory
- CPU-GPU Analogies
- Example: N-body gravitational simulation
- Parallel reductions
- Overview of Branching Techniques

3 The Importance of Data Parallelism
GPUs are designed for graphics
- Highly parallel tasks
GPUs process independent vertices & fragments
- Temporary registers are zeroed
- No shared or static data
- No read-modify-write buffers
Data-parallel processing
- GPU architecture is ALU-heavy
- Multiple vertex & pixel pipelines, multiple ALUs per pipe
- Hide memory latency (with more computation)

4 Arithmetic Intensity
Arithmetic intensity = ops per word transferred (computation / bandwidth)
Best to have high arithmetic intensity
Ideal GPGPU apps have:
- Large data sets
- High parallelism
- Minimal dependencies between data elements
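As a concrete illustration of the ratio above, a kernel's intensity can be estimated by counting its arithmetic ops against the words it moves; SAXPY is a standard low-intensity example (the helper name here is made up for illustration):

```python
def arithmetic_intensity(flops, words_moved):
    """Ops per word of memory traffic; higher ratios favor the GPU."""
    return flops / words_moved

# SAXPY, y[i] = a*x[i] + y[i]: 2 flops against 3 words moved
# (read x[i], read y[i], write y[i]) -- low intensity, bandwidth-bound.
saxpy_intensity = arithmetic_intensity(2, 3)
```

Data reuse (as in blocked matrix multiply) is what raises this ratio and makes an application a good GPGPU fit.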

5 Data Streams & Kernels
Streams
- Collection of records requiring similar computation
- Vertex positions, voxels, FEM cells, etc.
- Provide data parallelism
Kernels
- Functions applied to each element in the stream
- Transforms, PDE steps, ...
- Few dependencies between stream elements
- Encourage high arithmetic intensity
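In CPU terms, a stream is just an array and a kernel is a function mapped over it. A minimal sketch (the scale-and-bias kernel is a made-up stand-in for a real transform):

```python
def apply_kernel(kernel, stream):
    # Apply the kernel independently to every stream element.
    # No element reads another element's result -- this independence
    # is what lets the GPU run all elements in parallel.
    return [kernel(x) for x in stream]

def scale_bias(x, a=2.0, b=1.0):
    # Stand-in kernel: a trivial per-element transform.
    return a * x + b
```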

6 Example: Simulation Grid
Common GPGPU computation style
- Textures represent computational grids = streams
Many computations map to grids:
- Matrix algebra
- Image & volume processing
- Physically-based simulation
- Global illumination: ray tracing, photon mapping, radiosity
Non-grid streams can be mapped to grids

7 Stream Computation
Grid simulation algorithm
- Made up of steps
- Each step updates the entire grid
- Each step must complete before the next can begin
Grid is a stream, steps are kernels
- Kernel applied to each stream element
[Figure: cloud simulation algorithm]

8 Scatter vs. Gather
Grid communication
- Grid cells share information
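In the usual GPGPU terminology, gather means indexed reads (out[i] = a[idx[i]]) and scatter means indexed writes (out[idx[i]] = a[i]). A CPU sketch of the distinction:

```python
def gather(data, indices):
    # out[i] = data[indices[i]]: random reads.
    # This is what a fragment program's texture fetch provides.
    return [data[j] for j in indices]

def scatter(data, indices, size, fill=0):
    # out[indices[i]] = data[i]: random writes.
    # Fragment processors cannot do this -- each fragment's output
    # address is fixed to its own pixel.
    out = [fill] * size
    for value, j in zip(data, indices):
        out[j] = value
    return out
```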

9 Computational Resources Inventory
Programmable parallel processors
- Vertex & fragment pipelines
Rasterizer
- Mostly useful for interpolating addresses (texture coordinates) and per-vertex constants
Texture unit
- Read-only memory interface
Render to texture
- Write-only memory interface

10 Vertex Processor
Fully programmable (SIMD / MIMD)
Processes 4-vectors (RGBA / XYZW)
Capable of scatter but not gather
- Can change the location of the current vertex
- Cannot read info from other vertices
- Can only read a small constant memory
Latest GPUs: vertex texture fetch
- Random access memory for vertices
- Enables gather (but not from the vertex stream itself)

11 Fragment Processor
Fully programmable (SIMD)
Processes 4-component vectors (RGBA / XYZW)
Random access memory read (textures)
Capable of gather but not scatter
- RAM read (texture fetch), but no RAM write
- Output address fixed to a specific pixel
Typically more useful than the vertex processor
- More fragment pipelines than vertex pipelines
- Direct output (fragment processor is at the end of the pipeline)

12 CPU-GPU Analogies
CPU programming is familiar
GPU programming is graphics-centric
Analogies can aid understanding

13 CPU-GPU Analogies
Stream / data array (CPU) = texture (GPU)
Memory read (CPU) = texture sample (GPU)

14 Kernels
Kernel / loop body / algorithm step (CPU) = fragment program (GPU)

15 Feedback
Each algorithm step depends on the results of previous steps
Each time step depends on the results of the previous time step

16 Feedback
CPU: ... grid[i][j] = x; ...
Array write (CPU) = render to texture (GPU)
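The feedback loop can be sketched on the CPU as "ping-pong" buffering: each step reads the previous state and writes a fresh buffer, mirroring render-to-texture. The decay kernel below is a made-up stand-in for a real simulation step:

```python
def step(state, kernel):
    # One algorithm step: read the current state ("texture"), write a
    # brand-new buffer ("render to texture") -- never update in place.
    return [kernel(state, i) for i in range(len(state))]

def simulate(state, kernel, num_steps):
    # Feedback: each step's output becomes the next step's input.
    for _ in range(num_steps):
        state = step(state, kernel)
    return state

def decay(state, i):
    # Stand-in kernel: halve each cell per step.
    return 0.5 * state[i]
```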

17 GPU Simulation Overview
Analogies lead to implementation
- Algorithm steps are fragment programs (computational kernels)
- Current state is stored in textures
- Feedback via render to texture
One question: how do we invoke computation?

18 Invoking Computation
Must invoke computation at each pixel
- Just draw geometry!
- Most common GPGPU invocation is a full-screen quad
Other useful analogies:
- Rasterization = kernel invocation
- Texture coordinates = computational domain
- Vertex coordinates = computational range

19 Typical “Grid” Computation
Initialize “view” (so that pixels:texels::1:1)

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 0, 1, 0, 1);
glViewport(0, 0, outTexResX, outTexResY);

For each algorithm step:
- Activate render-to-texture
- Set up input textures and the fragment program
- Draw a full-screen quad (1x1)

20 Example: N-Body Simulation
Brute force: N = 8192 bodies
- N² gravity computations: 64M force computations / frame
- ~25 flops per force
- 10.5 fps: 17+ GFLOPS sustained on a GeForce 7800 GTX
Nyland, Harris, Prins, GP² poster

21 Computing Gravitational Forces
Each body attracts all other bodies
- N bodies, so N² forces
Draw into an NxN buffer
- Pixel (i,j) computes the force between bodies i and j
- Very simple fragment program
More than 2048 bodies makes it trickier
- Limited by max pbuffer size…
- “exercise for the reader”

22 Computing Gravitational Forces
F(i,j) = g * M_i * M_j / r(i,j)²,  where r(i,j) = |pos(i) - pos(j)|
Force is proportional to the inverse square of the distance between bodies
[Figure: NxN force texture holding force(i,j), indexed by body positions read from a separate position texture]
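A CPU sketch of the value each pixel (i, j) computes. Sign convention here: the returned vector is the force on body i, directed toward body j; g defaults to 1 for illustration:

```python
import math

def pair_force(pos_i, pos_j, m_i, m_j, g=1.0):
    # F(i,j) = g * M_i * M_j / r(i,j)^2, with r(i,j) = |pos(i) - pos(j)|.
    dx = pos_j[0] - pos_i[0]
    dy = pos_j[1] - pos_i[1]
    dz = pos_j[2] - pos_i[2]
    r2 = dx * dx + dy * dy + dz * dz
    r = math.sqrt(r2)
    mag = g * m_i * m_j / r2
    # Scale the unit direction vector by the force magnitude.
    return (mag * dx / r, mag * dy / r, mag * dz / r)
```

Like the fragment program, this blows up when i == j (r2 = 0); the GPU version relies on masking or clamping that case.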

23 Computing Gravitational Forces

float4 force(float2 ij : WPOS, uniform sampler2D pos) : COLOR0
{
    // Pos texture is 2D, not 1D, so we need to
    // convert body index into 2D coords for pos tex
    float4 iCoords = getBodyCoords(ij);   // helper defined elsewhere
    float4 iPosMass = tex2D(pos, iCoords.xy);
    float4 jPosMass = tex2D(pos, iCoords.zw);

    float3 dir = iPosMass.xyz - jPosMass.xyz;
    float r2 = dot(dir, dir);
    dir = normalize(dir);

    // g (gravitational constant) is a uniform defined elsewhere
    return dir * g * iPosMass.w * jPosMass.w / r2;
}

24 Computing Total Force
Have: array of (i,j) forces
Need: total force on each particle i
- Sum of each column of the force array
- Can do all N columns in parallel
This is called a parallel reduction
[Figure: summing one column i of the NxN force texture]

25 Parallel Reductions
1D parallel reduction:
- Sum N columns or rows in parallel
- Add the two halves of the texture together
- Repeat until we're left with a single row of texels
- NxN -> Nx(N/2) -> Nx(N/4) -> ... -> Nx1
Requires log2(N) steps
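The halving passes above can be sketched on the CPU as follows (assumes the buffer length is a power of two, just as the texture dimensions are):

```python
def reduce_sum(values):
    # Each pass adds the two halves of the buffer together, exactly as the
    # GPU renders NxN -> Nx(N/2) -> Nx(N/4) -> ... -> Nx1.
    buf = list(values)
    passes = 0
    while len(buf) > 1:
        half = len(buf) // 2
        buf = [buf[i] + buf[i + half] for i in range(half)]
        passes += 1
    # Returns the total and the number of passes taken (log2 of the length).
    return buf[0], passes
```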

26 Update Positions and Velocities
Now we have a 1D array of total forces, one per body
Update velocity:
- u(i, t+dt) = u(i, t) + F_total(i) * dt
- A simple pixel shader reads the previous velocity and force textures and creates a new velocity texture
Update position:
- x(i, t+dt) = x(i, t) + u(i, t) * dt
- A simple pixel shader reads the previous position and velocity textures and creates a new position texture
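The two update passes can be sketched on the CPU as below. Following the slide, the mass is folded into F_total and the position update reads the old velocity:

```python
def update(positions, velocities, forces, dt):
    # Pass 1: new velocity texture, u(i,t+dt) = u(i,t) + F_total(i)*dt.
    new_velocities = [u + f * dt for u, f in zip(velocities, forces)]
    # Pass 2: new position texture, x(i,t+dt) = x(i,t) + u(i,t)*dt.
    # Note: reads the *old* velocity texture, as on the slide.
    new_positions = [x + u * dt for x, u in zip(positions, velocities)]
    return new_positions, new_velocities
```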

27 Summary
Presented mappings of basic computational concepts to GPUs
- Basic concepts and terminology
- For introductory “Hello GPGPU” sample code, see
Only the beginning:
- The rest of the course presents advanced techniques, strategies, and specific algorithms.