On Dynamic Load Balancing on Graphics Processors
Daniel Cederman and Philippas Tsigas, Chalmers University of Technology
Overview: Motivation, Methods, Experimental evaluation, Conclusion
The problem setting: work is divided into tasks, and tasks are assigned to processors either offline (before execution) or online (during execution)
Static Load Balancing [figure: tasks and their subtasks are assigned to processors before execution starts]
Dynamic Load Balancing [figure: tasks and subtasks are distributed to processors during execution]
Task sharing [flowchart: check whether the work is done; try to acquire a task from the task set; if a task was acquired, perform it and add any new tasks it creates back to the task set; otherwise retry]
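A minimal CUDA sketch of this task-sharing loop. Task, TaskSet, getTask, addTask, performTask, and work_done are illustrative placeholders, not the paper's API; the task set could be any of the structures on the following slides.

```cuda
// Illustrative placeholder types and operations; the real task set is one of
// the four methods compared in the talk.
struct Task    { int data; };
struct TaskSet { /* method-specific state (queue, deques, lists, ...) */ };

__device__ bool getTask(TaskSet *s, Task *out);          // try to acquire a task
__device__ void addTask(TaskSet *s, const Task *t);      // publish a new subtask
__device__ void performTask(const Task *t, TaskSet *s);  // may call addTask()

// One thread block runs the loop: check the termination condition,
// try to get a task, perform it, otherwise retry.
__global__ void work_loop(TaskSet *set, volatile int *work_done)
{
    __shared__ Task task;
    __shared__ bool got_task;

    while (!*work_done) {                        // "work done?" condition
        if (threadIdx.x == 0)
            got_task = getTask(set, &task);      // "try to get task"
        __syncthreads();

        if (got_task)
            performTask(&task, set);             // may add new tasks to the set
        __syncthreads();                         // "no, retry"
    }
}
```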
System Model: CUDA. Global memory with gather and scatter, Compare-And-Swap, and Fetch-And-Inc. Multiprocessors, each running up to a maximum number of concurrent thread blocks [figure: multiprocessors executing thread blocks that share global memory]
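A small sketch of how these two primitives are exposed in CUDA (the ticket and lock_word variables are illustrative):

```cuda
// atomicAdd acts as Fetch-And-Inc, atomicCAS as Compare-And-Swap.
// Both operate on global memory and are the building blocks of the
// load-balancing methods that follow.
__device__ unsigned int ticket;   // illustrative shared counter
__device__ int lock_word;         // illustrative lock: 0 = free, 1 = taken

__device__ unsigned int fetch_and_inc()
{
    return atomicAdd(&ticket, 1u);            // returns the previous value
}

__device__ bool try_lock()
{
    return atomicCAS(&lock_word, 0, 1) == 0;  // succeeds only if it was free
}
```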
Synchronization
Blocking: uses mutual exclusion to allow only one process at a time to access the object.
Lock-free: multiple processes can access the object concurrently; at least one operation in a set of concurrent operations finishes in a finite number of its own steps.
Wait-free: multiple processes can access the object concurrently; every operation finishes in a finite number of its own steps.
Load Balancing Methods: Blocking Task Queue, Non-blocking Task Queue, Task Stealing, Static Task List
Blocking Queue [figure: thread blocks TB 1 .. TB n operate on a shared queue with Head, Tail, and Free pointers; a task T1 is enqueued and dequeued by one thread block at a time]
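A minimal sketch of a lock-based task queue along these lines, assuming a fixed-capacity array in global memory and exactly one thread per block calling enqueue/dequeue; all names are illustrative, not the paper's implementation.

```cuda
// Sketch of a blocking (lock-based) task queue in global memory.
#define QUEUE_CAPACITY 4096

struct Task { int data; };

struct BlockingQueue {
    Task buffer[QUEUE_CAPACITY];
    unsigned int head;   // next slot to dequeue from
    unsigned int tail;   // next slot to enqueue into
    int lock;            // 0 = free, 1 = held
};

__device__ void acquire_lock(int *l) { while (atomicCAS(l, 0, 1) != 0) { } }
__device__ void release_lock(int *l)
{
    __threadfence();     // make queue updates visible before releasing
    atomicExch(l, 0);
}

__device__ void enqueue(BlockingQueue *q, const Task *t)
{
    acquire_lock(&q->lock);
    q->buffer[q->tail % QUEUE_CAPACITY] = *t;
    q->tail++;
    release_lock(&q->lock);
}

__device__ bool dequeue(BlockingQueue *q, Task *out)
{
    bool got = false;
    acquire_lock(&q->lock);          // every thread block serializes here
    if (q->head != q->tail) {
        *out = q->buffer[q->head % QUEUE_CAPACITY];
        q->head++;
        got = true;
    }
    release_lock(&q->lock);
    return got;
}
```

The single lock is the obvious bottleneck: every enqueue and dequeue from every thread block serializes on it.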
Non-blocking Queue [figure: thread blocks TB 1 .. TB n concurrently enqueue and dequeue tasks T1 to T5 through Head and Tail pointers]
Reference: P. Tsigas and Y. Zhang, A simple, fast and scalable non-blocking concurrent FIFO queue for shared memory multiprocessor systems [SPAA '01]
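A simplified sketch of the lock-free pattern: dequeuers compete with Compare-And-Swap on the head index instead of taking a lock. This is not the full algorithm from the cited paper (which also handles ABA and slot reuse); all names are illustrative.

```cuda
// Simplified array-based non-blocking queue sketch.
// Assumes at most QUEUE_CAPACITY tasks are in flight at any time.
#define QUEUE_CAPACITY 4096

struct Task { int data; };

struct NonBlockingQueue {
    Task buffer[QUEUE_CAPACITY];
    int ready[QUEUE_CAPACITY];   // 1 once a slot's task is fully written
    unsigned int head;           // next slot to dequeue from
    unsigned int tail;           // next slot to enqueue into
};

__device__ void nbq_enqueue(NonBlockingQueue *q, const Task *t)
{
    unsigned int slot = atomicAdd(&q->tail, 1u) % QUEUE_CAPACITY;  // FAA claims a slot
    q->buffer[slot] = *t;
    __threadfence();                        // publish the task before the flag
    ((volatile int *)q->ready)[slot] = 1;
}

__device__ bool nbq_dequeue(NonBlockingQueue *q, Task *out)
{
    while (true) {
        unsigned int h = *(volatile unsigned int *)&q->head;
        unsigned int t = *(volatile unsigned int *)&q->tail;
        if (h >= t)
            return false;                             // queue looks empty
        if (atomicCAS(&q->head, h, h + 1) == h) {     // won the task at slot h
            unsigned int slot = h % QUEUE_CAPACITY;
            while (((volatile int *)q->ready)[slot] == 0) { }  // wait for the writer
            *out = q->buffer[slot];
            ((volatile int *)q->ready)[slot] = 0;     // allow the slot to be reused
            return true;
        }
        // CAS failed: another thread block took that task; retry.
    }
}
```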
Task Stealing [figure: each thread block TB 1 .. TB n has its own deque of tasks; a block pushes and pops tasks T1 to T5 locally and steals from another block's deque when its own is empty]
Reference: N. S. Arora, R. D. Blumofe, C. G. Plaxton, Thread Scheduling for Multiprogrammed Multiprocessors [SPAA '98]
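A simplified sketch of one per-block deque: the owner block pushes and pops at the tail, other blocks steal from the head with CAS. The cited ABP algorithm additionally carries a tag to rule out ABA and handles wrap-around; those details are omitted here, and all names are illustrative.

```cuda
// Simplified per-thread-block deque sketch for task stealing.
// Assumes one thread per block calls these functions.
#define DEQUE_CAPACITY 1024

struct Task { int data; };

struct Deque {
    Task buffer[DEQUE_CAPACITY];
    unsigned int head;   // thieves steal from here (via CAS)
    unsigned int tail;   // owner pushes and pops here
};

__device__ void push(Deque *d, const Task *t)     // owner only
{
    d->buffer[d->tail % DEQUE_CAPACITY] = *t;
    __threadfence();                              // publish the task before the index
    d->tail++;
}

__device__ bool pop(Deque *d, Task *out)          // owner only
{
    unsigned int h = *(volatile unsigned int *)&d->head;
    if (d->tail == h)
        return false;                             // deque is empty
    d->tail--;                                    // tentatively take the last task
    __threadfence();
    h = *(volatile unsigned int *)&d->head;
    if (d->tail > h) {                            // more than one task left: no conflict
        *out = d->buffer[d->tail % DEQUE_CAPACITY];
        return true;
    }
    d->tail = h + 1;                              // one task left: race the thieves
    if (atomicCAS(&d->head, h, h + 1) == h) {
        *out = d->buffer[h % DEQUE_CAPACITY];
        return true;
    }
    return false;                                 // a thief got it first
}

__device__ bool steal(Deque *d, Task *out)        // any other block
{
    unsigned int h = *(volatile unsigned int *)&d->head;
    if (h >= *(volatile unsigned int *)&d->tail)
        return false;                             // nothing to steal
    Task t = d->buffer[h % DEQUE_CAPACITY];
    if (atomicCAS(&d->head, h, h + 1) == h) {
        *out = t;                                 // stole the oldest task
        return true;
    }
    return false;                                 // lost the race; pick a new victim
}
```

A block that fails to pop from its own deque selects another block's deque, for example at random, and calls steal on it.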
Static Task List [figure: tasks T1 to T4 in an In array are each assigned to one thread block TB 1 .. TB 4; newly created tasks T5 to T7 are written to an Out array for the next pass]
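A sketch of the static task list pattern, assuming one task per thread block and a Fetch-And-Inc counter for the output array; names are illustrative, and the host swaps the arrays between kernel launches.

```cuda
// Static task list sketch: every thread block reads exactly one task from
// the "in" array; new tasks are appended to the "out" array with
// Fetch-And-Inc. The host reads out_count, swaps the arrays, and launches
// again until no new tasks are produced.
struct Task { int data; };

__device__ unsigned int out_count;        // tasks produced in this pass

__global__ void process_static_list(const Task *in, Task *out)
{
    const Task *task = &in[blockIdx.x];   // static assignment by block index

    // ... all threads of the block cooperate on *task here ...

    if (threadIdx.x == 0) {
        Task subtask = *task;             // placeholder for a newly created subtask
        unsigned int slot = atomicAdd(&out_count, 1u);  // claim an output slot
        out[slot] = subtask;              // no lock needed: slots are unique
    }
}
```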
Octree Partitioning (bandwidth bound)
Four-in-a-row (computation intensive)
Graphics Processors: 8800GT with 14 multiprocessors and 57 GB/s bandwidth; 9600GT with 8 multiprocessors and 57 GB/s bandwidth
Blocking Queue – Octree/9600GT
Blocking Queue – Octree/8800GT
Blocking Queue – Four-in-a-row
Non-blocking Queue – Octree/9600GT
Non-blocking Queue – Octree/8800GT
Non-blocking Queue – Four-in-a-row
Task stealing – Octree/9600GT
Task stealing – Octree/8800GT
Task stealing – Four-in-a-row
Static Task List
Octree Comparison
Previous work
Korch M., Rauber T., A comparison of task pools for dynamic load balancing of irregular algorithms, Concurrency and Computation: Practice & Experience, 16, 2003
Heirich A., Arvo J., A competitive analysis of load balancing strategies for parallel ray tracing, Journal of Supercomputing, 12, 1998
Foley T., Sugerman J., KD-tree acceleration structures for a GPU raytracer, Graphics Hardware 2005
Conclusion
Synchronization plays a significant role in dynamic load balancing.
Lock-free data structures and synchronization scale well and look promising for general-purpose programming on GPUs.
Locks perform poorly.
The introduction of operations such as CAS and FAA in the new GPUs is a welcome development.
Work stealing could outperform static load balancing.
Thank you!