©Wen-mei W. Hwu and David Kirk/NVIDIA 2010 ECE 498HK Computational Thinking for Many-core Computing Lecture 15: Dealing with Dynamic Data

Objective
To understand the nature of dynamic input data extraction
To learn efficient queue structures that support massively parallel extraction of input data from bulk data structures
To learn efficient kernel structures that cope with dynamic variation of working set size

Examples of Computation with Dynamic Input Data
Graph search and optimization problems
–Breadth-first search
–Shortest path
–Graph coverage
–…
Rendering and game physics
–Ray tracing
–Collision detection
–…

Dynamic Data Extraction
The data to be processed in each phase of computation must be dynamically determined and extracted from a bulk data structure
–Harder when the bulk data structure is not organized for massively parallel access, such as a graph
Graph algorithms are popular examples that deal with dynamic data
–Widely used in EDA and large-scale optimization applications
–We will use Breadth-First Search (BFS) as an example

Main Challenges of Dynamic Data
Input data must be organized for locality, coalescing, and contention avoidance as they are extracted during execution
The amount of work and level of parallelism often grow and shrink during execution
–As more or less data is extracted during each phase
–Hard to fit efficiently into one CUDA/OpenCL kernel configuration, which cannot be changed once launched
–Different kernel strategies fit different data sizes

Breadth-First Search (BFS)
[Figure: BFS on a small example graph with vertices s, r, t, u, v, w, x, y, showing the frontier vertices and visited vertices at Levels 1 through 4]

Sequential BFS
Store the frontier in a queue
Complexity: O(V+E)
[Figure: the frontier queue across Levels 1–4; vertices are dequeued from the front while newly discovered vertices are enqueued at the back]
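For reference, a minimal sketch of this sequential queue-based BFS, assuming the graph is stored in CSR form (the array names row_offsets and col_indices are illustrative):

    // Sequential BFS over a graph in CSR form (array names are illustrative).
    // Each vertex is dequeued once and each edge examined once: O(V + E).
    #include <stdlib.h>

    void bfs_sequential(const int *row_offsets, const int *col_indices,
                        int num_vertices, int source, int *level)
    {
        int *queue = (int *)malloc(num_vertices * sizeof(int));
        int head = 0, tail = 0;

        for (int v = 0; v < num_vertices; ++v) level[v] = -1;  // -1 = unvisited
        level[source] = 0;
        queue[tail++] = source;                                // enqueue source

        while (head < tail) {                                  // frontier not empty
            int v = queue[head++];                             // dequeue
            for (int e = row_offsets[v]; e < row_offsets[v + 1]; ++e) {
                int u = col_indices[e];
                if (level[u] == -1) {                          // first visit
                    level[u] = level[v] + 1;
                    queue[tail++] = u;                         // enqueue neighbor
                }
            }
        }
        free(queue);
    }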

Parallelism in BFS
Parallel propagation within each level
One GPU kernel per level, with a global barrier (the kernel boundary) between levels
[Figure: the vertices of each BFS level are expanded in parallel; e.g., Kernel 3 processes Level 3 and Kernel 4 processes Level 4, separated by a global barrier]
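A host-side sketch of this level-by-level structure; bfs_level_kernel is a hypothetical kernel (a version is sketched under the node-oriented slide below) that expands every vertex of the current level and clears a done flag whenever it discovers a new vertex:

    // Host driver: one kernel launch per BFS level. The gap between successive
    // launches serves as the global barrier separating the levels.
    // Assumes d_level was initialized to -1 everywhere except level[source] = 0.
    __global__ void bfs_level_kernel(const int *row_offsets, const int *col_indices,
                                     int *level, int num_vertices,
                                     int curr, int *done);

    void bfs_gpu_levels(const int *d_row_offsets, const int *d_col_indices,
                        int *d_level, int num_vertices, int *d_done)
    {
        const int threads = 256;
        const int blocks = (num_vertices + threads - 1) / threads;

        int curr = 0, h_done;
        do {
            h_done = 1;
            cudaMemcpy(d_done, &h_done, sizeof(int), cudaMemcpyHostToDevice);

            bfs_level_kernel<<<blocks, threads>>>(d_row_offsets, d_col_indices,
                                                  d_level, num_vertices, curr,
                                                  d_done);

            cudaMemcpy(&h_done, d_done, sizeof(int), cudaMemcpyDeviceToHost);
            ++curr;
        } while (!h_done);  // stop once a level discovers no new vertices
    }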

BFS in VLSI CAD: Maze Routing
[Figure: maze routing on a grid with a blockage, a net, and its terminals]

BFS in VLSI CAD: Logic Simulation / Timing Analysis

BFS in VLSI CAD
In formal verification, for reachability analysis
In clustering, for finding connected components
In logic synthesis
…

Potential Pitfall of Parallel Algorithms
A greatly accelerated O(n²) algorithm is still slower than an O(n log n) algorithm for large enough n; at n = 10⁶, for example, the O(n²) algorithm does roughly 50,000 times more work, far more than any realistic GPU speedup can recover.
Always keep the fast sequential algorithm in view as the baseline.
[Figure: running time vs. problem size, with the accelerated O(n²) curve eventually overtaking the O(n log n) curve]

Node-Oriented Parallelization
IIIT-BFS
–P. Harish et al., "Accelerating large graph algorithms on the GPU using CUDA"
–Each thread is dedicated to one node
–Every thread examines its node's neighbors to determine whether the node will be a frontier node in the next phase
–Complexity O(VL+E) (compared with O(V+E)), where L is the number of levels
–Slower than the sequential version for large graphs, especially sparsely connected ones
[Figure: every node is assigned a thread in every level, even though only the frontier nodes do useful work]
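A sketch of a node-oriented level kernel in this spirit (an illustration of the approach, not the IIIT-BFS code itself); it matches the hypothetical bfs_level_kernel used in the host loop above, with levels stored in an array and -1 meaning unvisited:

    // One thread per vertex. In every level, all V threads run, even though
    // only the frontier vertices do useful work -- hence the O(V*L + E) cost.
    __global__ void bfs_level_kernel(const int *row_offsets, const int *col_indices,
                                     int *level, int num_vertices,
                                     int curr, int *done)
    {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= num_vertices || level[v] != curr) return;  // not a frontier vertex

        for (int e = row_offsets[v]; e < row_offsets[v + 1]; ++e) {
            int u = col_indices[e];
            if (level[u] == -1) {          // unvisited neighbor
                level[u] = curr + 1;       // benign race: every writer stores curr + 1
                *done = 0;                 // another level is needed
            }
        }
    }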

Matrix-Based Parallelization
Yangdong Deng et al., "Taming Irregular EDA Applications on GPUs"
Propagation is done through matrix-vector multiplication
–For sparsely connected graphs, the connectivity matrix is a sparse matrix
Complexity O(V+EL) (compared with O(V+E)), where L is the number of levels
–Slower than sequential for large graphs
[Figure: the frontier vector is repeatedly multiplied by the adjacency matrix to produce the next frontier]
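A sketch of one propagation step written as a Boolean sparse matrix–vector product, assuming the transposed adjacency matrix is stored in CSR form (illustrative names); every step scans all the edges of the graph, which is where the extra factor of L comes from:

    // One thread per vertex: new_frontier[v] = OR over in-neighbors u of frontier[u].
    // Each step touches every nonzero of the matrix, so L steps cost O(E*L) edge work.
    __global__ void frontier_spmv(const int *row_offsets,   // CSR of the transposed adjacency matrix
                                  const int *col_indices,
                                  const int *frontier,      // 0/1 vector for the current level
                                  int *new_frontier,        // 0/1 vector for the next level
                                  const int *visited,
                                  int num_vertices)
    {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= num_vertices) return;

        int reached = 0;
        for (int e = row_offsets[v]; e < row_offsets[v + 1]; ++e)
            reached |= frontier[col_indices[e]];   // "multiply-add" in the Boolean semiring

        new_frontier[v] = reached && !visited[v];  // keep only newly reached vertices
    }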

Need a More General Technique
To efficiently handle most graph types
Use more specialized formulations when appropriate, as an optimization
Efficient queue-based parallel algorithms
–Hierarchical, scalable queue implementation
–Hierarchical kernel arrangements

An Initial Attempt
Manage the queue structure explicitly
–Complexity: O(V+E)
–Dequeue in parallel: each frontier node is a thread
–Enqueue in sequence (see the sketch below): poor coalescing, poor scalability
–No speedup
[Figure: the frontier is dequeued in parallel, but the next-level queue is built sequentially]
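One plausible reading of this attempt, sketched under the assumption that all threads append to a single global queue through one atomically updated tail pointer; the atomics effectively serialize the enqueues, and the resulting writes are scattered and uncoalesced:

    // Parallel dequeue (one thread per frontier vertex), but every insertion
    // contends on the same global tail counter, so enqueues are serialized.
    __global__ void bfs_single_queue(const int *row_offsets, const int *col_indices,
                                     int *level, int curr,
                                     const int *frontier, int frontier_size,
                                     int *next_frontier, int *next_tail)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= frontier_size) return;

        int v = frontier[i];                                  // parallel dequeue
        for (int e = row_offsets[v]; e < row_offsets[v + 1]; ++e) {
            int u = col_indices[e];
            if (atomicCAS(&level[u], -1, curr + 1) == -1) {   // claim an unvisited vertex
                int slot = atomicAdd(next_tail, 1);           // global contention point
                next_frontier[slot] = u;                      // scattered, uncoalesced write
            }
        }
    }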

Parallel Insert-Compact Queues
C. Lauterbach et al., "Fast BVH Construction on GPUs"
Parallel enqueue, but with a compaction cost
Not suitable for light-node problems
[Figure: each thread writes its new nodes, or empty slots (Φ), into a sparse array, which is then compacted into the next frontier before propagation]
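A sketch of the compaction step using Thrust, assuming each thread has already written its outputs into a preallocated sparse array with -1 marking the empty (Φ) slots; the extra pass over the whole array is the compaction cost mentioned above:

    #include <thrust/device_vector.h>
    #include <thrust/copy.h>

    struct is_valid {
        __host__ __device__ bool operator()(int x) const { return x >= 0; }  // Φ encoded as -1
    };

    // d_sparse holds each thread's outputs with -1 in unused slots;
    // compaction gathers the valid entries into the next frontier.
    int compact_frontier(const thrust::device_vector<int> &d_sparse,
                         thrust::device_vector<int> &d_next)
    {
        auto end = thrust::copy_if(d_sparse.begin(), d_sparse.end(),
                                   d_next.begin(), is_valid());
        return static_cast<int>(end - d_next.begin());  // size of the new frontier
    }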

Basic Ideas
Each thread processes one or more frontier nodes
Find the index of each new frontier node
Build the queue of the next frontier hierarchically
[Figure: local queues q1, q2, q3 are concatenated into a global queue; the global index of an element of q2 = offset of q2 (the number of nodes in q1) + its index within q2]

Two-Level Hierarchy
Block queue (b-queue)
–Inserted into by all threads in a block
–Resides in shared memory
Global queue (g-queue)
–Inserted into only when a block completes
Problem:
–Collisions on the b-queue: threads in the same block can cause heavy contention
[Figure: per-block b-queues in shared memory are flushed into a single g-queue in global memory]
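A minimal sketch of the two-level scheme (illustrative names and a fixed b-queue capacity, assumed large enough for one block's output): each thread appends to the shared-memory b-queue with an atomic on a shared counter, and the block flushes the b-queue to the g-queue once, using a single global atomic per block:

    #define B_QUEUE_CAPACITY 512   // assumed capacity; must bound one block's output

    // Each block collects its newly discovered vertices in shared memory and
    // flushes them to the global queue once, when the block completes.
    __global__ void bfs_two_level(const int *row_offsets, const int *col_indices,
                                  int *level, int curr,
                                  const int *frontier, int frontier_size,
                                  int *g_queue, int *g_tail)
    {
        __shared__ int b_queue[B_QUEUE_CAPACITY];
        __shared__ int b_tail;      // shared-memory tail of the block queue
        __shared__ int g_offset;    // where this block's entries start in g_queue

        if (threadIdx.x == 0) b_tail = 0;
        __syncthreads();

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < frontier_size) {
            int v = frontier[i];
            for (int e = row_offsets[v]; e < row_offsets[v + 1]; ++e) {
                int u = col_indices[e];
                if (atomicCAS(&level[u], -1, curr + 1) == -1) {
                    int slot = atomicAdd(&b_tail, 1);  // contention only within the block
                    b_queue[slot] = u;                 // assumes slot < B_QUEUE_CAPACITY
                }
            }
        }
        __syncthreads();

        // One global atomic per block reserves a contiguous region of g_queue.
        if (threadIdx.x == 0) g_offset = atomicAdd(g_tail, b_tail);
        __syncthreads();

        // Coalesced, cooperative copy of the b-queue into the g-queue.
        for (int j = threadIdx.x; j < b_tail; j += blockDim.x)
            g_queue[g_offset + j] = b_queue[j];
    }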

Warp-Level Queue
[Figure: thread scheduling over time for warps i and j, with the 32 threads of each warp divided into groups]
Divide threads into 8 groups (for the GTX 280)
–In general, one group per SP in the SM
–One queue written by each SP in the SM
Each group writes to one warp-level queue
Atomic operations are still needed
–But with a much lower level of contention
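A sketch of the insertion side of the warp-level queues, assuming threads are mapped to one of 8 w-queues by threadIdx.x % 8 and the queues use the interleaved W-queues[][8] layout described on the next slide (names and capacities are illustrative):

    #define NUM_W_QUEUES 8     // one per SP on the GTX 280, as the slide assumes
    #define W_QUEUE_DEPTH 64   // illustrative per-queue capacity

    // Illustration of the insertion side only: each thread pushes a value into
    // the w-queue selected by its lane group, so only blockDim.x / 8 threads
    // contend on any one tail counter instead of the whole block.
    __global__ void w_queue_demo(const int *values, int n)
    {
        __shared__ int w_queues[W_QUEUE_DEPTH][NUM_W_QUEUES];  // interleaved layout
        __shared__ int w_tails[NUM_W_QUEUES];                   // one tail per w-queue

        if (threadIdx.x < NUM_W_QUEUES) w_tails[threadIdx.x] = 0;
        __syncthreads();

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            int q = threadIdx.x % NUM_W_QUEUES;      // this thread's w-queue
            int slot = atomicAdd(&w_tails[q], 1);    // low-contention shared atomic
            w_queues[slot][q] = values[i];           // assumes slot < W_QUEUE_DEPTH
        }
        __syncthreads();
        // ... the w-queues would next be combined into the b-queue / g-queue ...
    }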

Three-Level Hierarchy
[Figure: per-warp w-queues feed per-block b-queues, which feed the single g-queue]

Hierarchical Queue Management
Shared memory:
–Interleaved queue layout (W-queues[][8]): no bank conflicts
Global memory:
–Coalescing when releasing a b-queue to the g-queue
–Moderate contention across blocks
Texture memory:
–Stores the graph structure (random access, no coalescing)
–The Fermi cache may help
[Figure: interleaved shared-memory layout of W-queue0, W-queue1, …, W-queue7]

Hierarchical Queue Management
Advantages and limitations
–The technique can be applied to any inherently sequential data structure
–The w-queues and b-queues are limited by the capacity of shared memory; if we know the upper limit of the node degree, we can adjust the number of threads per block accordingly

ANY FURTHER QUESTIONS? ©Wen-mei W. Hwu and David Kirk/NVIDIA 2010