
1 CIS 565: GPU Programming and Architecture Original Slides by: Suresh Venkatasubramanian Updates by Joseph Kider and Patrick Cozzi

2 Administrivia Meetings: Monday and Wednesday, 1:30-3:00pm, Towne 309. Recorded lectures upon request. Website: http://www.seas.upenn.edu/~cis565/

3 Administrivia Instructor: Joseph Kider (kiderj@seas)

4 Administrivia Teaching Assistant: Qing Sun

5 Administrivia Prerequisites: CIS 460: Introduction to Computer Graphics and CIS 501: Computer Architecture. Most important: C/C++ and OpenGL.

6 CIS 534: Multicore Programming and Architecture Course Description: This course is a pragmatic examination of multicore programming and the hardware architecture of modern multicore processors. Unlike the sequential single-core processors of the past, utilizing a multicore processor requires programmers to identify parallelism and write explicitly parallel code. Topics covered include: the relevant architectural trends and aspects of multicores, approaches for writing multicore software by extracting data parallelism (vectors and SIMD), thread-level parallelism, and task-based parallelism, efficient synchronization, and program profiling and performance tuning. The course focuses primarily on mainstream shared-memory multicores with some coverage of graphics processing units (GPUs). Cluster-based supercomputing is not a focus of this course. Several programming assignments and a course project will provide students first-hand experience with programming, experimentally analyzing, and tuning multicore software. Students are expected to have a solid understanding of computer architecture and strong programming skills (including experience with C/C++). We will not overlap very much.

7 What is GPU (Parallel) Computing? Parallel computing: using multiple processors to more quickly perform a computation, or to perform a larger computation in the same time. The PROGRAMMER expresses the parallelism. Clusters of computers (MPI, networks, cloud computing) are not covered. Shared-memory multiprocessors (called "multicore" when on the same chip) are the focus of CIS 534. GPUs (graphics processing units) are the focus of CIS 565. Slide courtesy of Milo Martin.

8 Administrivia Course Overview: system and GPU architecture; real-time graphics programming with OpenGL and GLSL; general purpose programming with CUDA and OpenCL; problem domain: up to you. Hands-on.

9 Administrivia Goals: Program massively parallel processors for high performance, functionality and maintainability, and scalability. Gain knowledge of parallel programming principles and patterns, processor architecture features and constraints, and programming APIs, tools, and techniques.

10 Administrivia Grading: Homeworks (4-5) 40%; Paper Presentation 10%; Final Project 40% + 5%; Final 10%.

11 Administrivia Bonus days: five per person. A no-questions-asked one-day extension; multiple bonus days can be used on the same assignment; can be used for most, but not all, assignments. Strict late policy for work not turned in by 11:59pm of the due date: 25% deduction; 2 days late: 50%; 3 days late: 75%; 4 or more days: 100%. Add a Readme when using bonus days.

12 Administrivia Academic Honesty: Discussion with other students, past or present, is encouraged. Any reference to assignments from previous terms or web postings is unacceptable. Any copying of non-trivial code is unacceptable (non-trivial = more than a line or so); this includes reading someone else's code and then going off to write your own.

13 Administrivia Academic Honesty: Penalties for academic dishonesty: zero on the assignment for the first occasion; automatic failure of the course for repeat offenses.

14 Administrivia Textbook: None. Related graphics books: Graphics Shaders, OpenGL Shading Language, GPU Gems 1-3. Related general GPU books: Programming Massively Parallel Processors, Patterns for Parallel Programming.

15 Administrivia Do I need a GPU? Yes: an NVIDIA GeForce 8 series or higher. No, you do not need your own: Moore 100b has NVIDIA GeForce 9800s, and the SIG Lab has NVIDIA GeForce 8800s, two GeForce 480s, and one Fermi Tesla.

16 Administrivia Demo: What GPU do I have? Demo: What version of OpenGL/CUDA/OpenCL does it support?
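To make the "what GPU do I have" question concrete, here is a minimal sketch of a device query using the CUDA runtime API (this is not the in-class demo, which is not reproduced here); the equivalent OpenGL check is glGetString(GL_VERSION).

```cuda
// Minimal sketch: enumerate local GPUs and report the CUDA versions available.
// Illustrative only; the in-class demo may differ.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("CUDA devices found: %d\n", deviceCount);

    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %zu MB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }

    int runtimeVersion = 0, driverVersion = 0;
    cudaRuntimeGetVersion(&runtimeVersion);
    cudaDriverGetVersion(&driverVersion);
    printf("CUDA runtime %d, driver %d\n", runtimeVersion, driverVersion);
    return 0;
}
```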

17 Aside: This class is about 3 things: performance, performance, and performance. Ok, not really; it is also about correctness, "-abilities", etc. But the emphasis is on nitty-gritty, real-world, wall-clock performance. No proofs! Slide courtesy of Milo Martin.

18 Exercise: Parallel Sorting
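As a warm-up for the exercise, here is a minimal sketch of one classic data-parallel approach, odd-even transposition sort, written as a CUDA kernel. It assumes the array fits in a single thread block; it is an illustration, not the actual exercise handout.

```cuda
// Sketch: odd-even transposition sort for a small array handled by one block.
// Hypothetical example; the real exercise may use a different algorithm.
#include <cstdio>
#include <cuda_runtime.h>

#define N 512

__global__ void oddEvenSort(int *data) {
    int tid = threadIdx.x;
    // N phases; even phases compare pairs (0,1),(2,3),..., odd phases (1,2),(3,4),...
    for (int phase = 0; phase < N; ++phase) {
        int i = 2 * tid + (phase & 1);          // left index of this thread's pair
        if (i + 1 < N && data[i] > data[i + 1]) {
            int tmp = data[i];
            data[i] = data[i + 1];
            data[i + 1] = tmp;
        }
        __syncthreads();                         // all swaps finish before the next phase
    }
}

int main() {
    int h[N];
    for (int i = 0; i < N; ++i) h[i] = N - i;    // reverse-sorted input

    int *d;
    cudaMalloc(&d, N * sizeof(int));
    cudaMemcpy(d, h, N * sizeof(int), cudaMemcpyHostToDevice);

    oddEvenSort<<<1, N / 2>>>(d);                // one thread per pair

    cudaMemcpy(h, d, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("first=%d last=%d\n", h[0], h[N - 1]);
    cudaFree(d);
    return 0;
}
```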

19 Credits David Kirk (NVIDIA), Wen-mei Hwu (UIUC), David Luebke, Wolfgang Engel, etc.

20 What is a GPU? GPU: Graphics Processing Unit, the processor that resides on your graphics card. GPUs allow us to achieve the unprecedented graphics capabilities now available in games.

21 What is a GPU? Demo: NVIDIA GTX 400. Demo: Triangle throughput.

22 Why Program the GPU? Chart from: http://ixbtlabs.com/articles3/video/cuda-1-p1.html

23 Why Program the GPU? Compute: Intel Core i7 (4 cores): 100 GFLOPS; NVIDIA GTX 280 (240 cores): 1 TFLOPS. Memory bandwidth: system memory: 60 GB/s; NVIDIA GT200: 150 GB/s. Install base: over 200 million NVIDIA G80s shipped.

24 How did this happen? Games demand advanced shading; fast GPUs = better shading; the need for speed = continued innovation. The gaming industry has overtaken the defense, finance, oil, and healthcare industries as the main driving factor for high-performance processors.

25 GPU = Fast co-processor? GPU speed has been increasing at "cubed Moore's Law". This is a consequence of the data-parallel streaming aspects of the GPU. GPUs are cheap! Put a couple together and you can get a supercomputer. NYT, May 26, 2003: TECHNOLOGY; From PlayStation to Supercomputer for $50,000: the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign builds a supercomputer using 70 individual Sony PlayStation 2 machines; the project required no hardware engineering other than mounting the PlayStations in a rack and connecting them with a high-speed network switch. So can we use the GPU for general-purpose computing?

26 Yes! Wealth of applications: Voronoi diagrams, data analysis, motion planning, geometric optimization, physical simulation, matrix multiplication, conjugate gradient, sorting and searching, force-field simulation, particle systems, molecular dynamics, graph drawing, signal processing, database queries, range queries, image processing, radar, sonar, oil exploration, finance, planning, optimization … and graphics too!

27 When does "GPU = fast co-processor" work? Real-time visualization of complex phenomena: the GPU (like a fast parallel processor) can simulate physical processes such as fluid flow, n-body systems, and molecular dynamics. In general: massively parallel tasks.
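For instance, an n-body force calculation is embarrassingly parallel: each body's acceleration can be computed by its own thread, independently of all the others. A minimal CUDA sketch, with made-up sizes and an arbitrary softening constant:

```cuda
// Sketch of why n-body simulation maps well to the GPU: one thread per body,
// each summing the gravitational pull of every other body independently.
// Illustrative example, not a course assignment.
#include <cuda_runtime.h>

__global__ void accel(const float4 *pos, float3 *acc, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 a = make_float3(0.0f, 0.0f, 0.0f);
    float4 pi = pos[i];                              // (x, y, z, mass)
    const float eps2 = 1e-4f;                        // softening avoids divide-by-zero

    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];
        float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
        float distSqr = dx * dx + dy * dy + dz * dz + eps2;
        float invDist = rsqrtf(distSqr);
        float s = pj.w * invDist * invDist * invDist; // m_j / r^3
        a.x += dx * s;  a.y += dy * s;  a.z += dz * s;
    }
    acc[i] = a;
}

int main() {
    const int n = 1024;
    float4 *dPos;  float3 *dAcc;
    cudaMalloc(&dPos, n * sizeof(float4));
    cudaMalloc(&dAcc, n * sizeof(float3));
    cudaMemset(dPos, 0, n * sizeof(float4));         // placeholder initial state
    accel<<<(n + 255) / 256, 256>>>(dPos, dAcc, n);
    cudaDeviceSynchronize();
    cudaFree(dPos);  cudaFree(dAcc);
    return 0;
}
```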

28 When does "GPU = fast co-processor" work? Interactive data analysis: for effective visualization of data, interactivity is key.

29 When does "GPU = fast co-processor" work? Rendering complex scenes: procedural shaders can offload much of the expensive rendering work to the GPU. Still not the Holy Grail of "80 million triangles at 30 frames/sec*", but it helps. * Alvy Ray Smith, Pixar. Note: the NVIDIA Quadro 5000 is calculated to push 950 million triangles per second (http://www.nvidia.com/object/product-quadro-5000-us.html).

30 Stream Programming A stream is a sequence of data (numbers, colors, RGBA vectors, …). A kernel is a (fragment) program that runs on each element of a stream, generating an output stream (pixel buffer).
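In later CUDA terms, the same idea looks like the sketch below: an input array is the stream, a small per-element program is the kernel, and a second array is the output stream. The kernel name and sizes here are hypothetical.

```cuda
// Stream-programming sketch: the input stream is an array of values, the
// kernel runs once per element, and the output stream is a second array.
// Hypothetical illustration; names and sizes are made up.
#include <cuda_runtime.h>

__global__ void brighten(const float *in, float *out, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per stream element
    if (i < n)
        out[i] = fminf(in[i] * gain, 1.0f);          // the same small program for every element
}

int main() {
    const int n = 1 << 20;
    float *dIn, *dOut;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemset(dIn, 0, n * sizeof(float));           // placeholder input stream

    brighten<<<(n + 255) / 256, 256>>>(dIn, dOut, n, 1.5f);
    cudaDeviceSynchronize();

    cudaFree(dIn);  cudaFree(dOut);
    return 0;
}
```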

31 Stream Programming Kernel = vertex/fragment shader. Input stream = stream of vertices, primitives, or fragments. Output stream = frame buffer or other buffer (transform feedback). Multiple kernels = a multi-pass rendering sequence on the GPU.

32 To program the GPU, one must think of it as a (parallel) stream processor.

33 What is the cost of a stream program? Number of kernels: readbacks from the GPU to main memory are expensive, and so is transferring data to the GPU. Complexity of the kernel: more complexity means it takes longer to move data through the rendering pipeline. Number of memory accesses: non-local memory access is expensive. Number of branches: divergent branches are expensive.
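A quick way to see the readback cost in practice is to time a trivial kernel against the device-to-host copy of its result. The sketch below uses CUDA events for timing; exact numbers depend entirely on the GPU and bus, so treat it as illustrative only.

```cuda
// Sketch: compare a cheap kernel against the device->host readback of its
// output, timed with CUDA events. Illustrative; results vary by machine.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void addOne(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

static float timeMs(cudaEvent_t start, cudaEvent_t stop) {
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    return ms;
}

int main() {
    const int n = 1 << 24;                            // ~16M floats (64 MB)
    float *dData, *hData = (float *)malloc(n * sizeof(float));
    cudaMalloc(&dData, n * sizeof(float));
    cudaMemset(dData, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);  cudaEventCreate(&stop);

    cudaEventRecord(start);
    addOne<<<(n + 255) / 256, 256>>>(dData, n);       // cheap kernel
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    printf("kernel:   %.2f ms\n", timeMs(start, stop));

    cudaEventRecord(start);
    cudaMemcpy(hData, dData, n * sizeof(float), cudaMemcpyDeviceToHost);  // readback
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    printf("readback: %.2f ms\n", timeMs(start, stop));

    cudaFree(dData);  free(hData);
    return 0;
}
```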

34 What will this course cover?

35 1. Stream Programming Principles The OpenGL programmable pipeline; the principles of stream hardware; how do we program with streams?

36 2. Shaders and Effects How do we compute the complex effects found in today's games? Examples: parallax mapping, reflections, skin and hair, particle systems, deformable meshes, morphing, animation.

37 3. GPGPU / GPU Computing How do we use the GPU as a fast co-processor? GPGPU languages: CUDA and OpenCL. High performance computing. Numerical methods and linear algebra: inner products, matrix-vector operations, matrix-matrix operations. Sorting, fluid simulations, fast Fourier transforms, graph algorithms, and more. At what point does the GPU become faster than the CPU for matrix operations? For other operations?
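As one concrete instance of that crossover question, a naive GPU matrix-vector multiply assigns one thread per output row; benchmarking something like this against a CPU loop over increasing sizes is how the break-even point is typically found. The kernel and sizes below are hypothetical.

```cuda
// Sketch of a naive GPU matrix-vector multiply (y = A*x), one thread per row.
// Hypothetical example; a tuned version would use shared memory and coalescing.
#include <cuda_runtime.h>

__global__ void matVec(const float *A, const float *x, float *y, int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= rows) return;

    float sum = 0.0f;
    for (int c = 0; c < cols; ++c)
        sum += A[r * cols + c] * x[c];   // row-major A
    y[r] = sum;
}

int main() {
    const int rows = 4096, cols = 4096;
    float *dA, *dX, *dY;
    cudaMalloc(&dA, rows * cols * sizeof(float));
    cudaMalloc(&dX, cols * sizeof(float));
    cudaMalloc(&dY, rows * sizeof(float));
    cudaMemset(dA, 0, rows * cols * sizeof(float));  // placeholder data
    cudaMemset(dX, 0, cols * sizeof(float));

    matVec<<<(rows + 255) / 256, 256>>>(dA, dX, dY, rows, cols);
    cudaDeviceSynchronize();

    cudaFree(dA);  cudaFree(dX);  cudaFree(dY);
    return 0;
}
```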

38 4. Optimizations How do we use the full potential of the GPU? What tools are there to analyze the performance of our algorithms?

39 What we want you to get out of this course! 1. Understanding of the GPU as a graphics pipeline 2. Understanding of the GPU as a high performance compute device 3. Understanding of GPU architectures 4. Programming in GLSL, CUDA, and OpenCL 5. Exposure to many core graphics effects performed on GPUs 6. Exposure to many core parallel algorithms performed on GPUs

