Theory: Asleep at the Switch to Many-Core
Phillip B. Gibbons, Intel Research Pittsburgh
Workshop on Theory and Many-Core, May 29, 2009
Slides are © Phillip B. Gibbons

Two decades after the peak of Theory’s interest in parallel computing…
The Age of Many-Core is finally underway
- Fueled by Moore’s Law: 2X cores per chip every 18 months
All aboard the parallelism train!
- (Almost) the only way to faster apps

All Aboard the Parallelism Train?
Switch to Many-Core… Many Challenges
- Interest waned long ago, yet the problems were NOT solved
- Research needed in all aspects of Many-Core:
  - Computer Architecture? YES!
  - Programming Languages & Compilers? YES!
  - Operating & Runtime Systems? YES!
  - Theory? Who has answered the call?

Theory: Asleep at the Switch
“Engineer driving derailed Staten Island train may have fallen asleep at the switch.” (12/26/08)
Theory needs to wake up & regain a leadership role in parallel computing

Theory’s Strengths
- Conceptual Models: abstract models of computation
- New Algorithmic Paradigms: new algorithms, new protocols
- Provable Correctness: safety, liveness, security, privacy, …
- Provable Performance Guarantees: approximation, probabilistic, new metrics
- Inherent Power/Limitations: of primitives, features, …
…among others

Five Areas in Which Theory Can (Should) Have an Important Impact
- Parallel Thinking
- Memory Hierarchy
- Asymmetry/Heterogeneity
- Concurrency Primitives
- Power
[Image: Montparnasse train derailment, 1895]

Impact Area: Parallel Thinking
Key: a good model of parallel computation
- Expresses parallelism
- Good parallel programmer’s model
- Good for teaching, and for teaching “how to think”
- Can be engineered to good performance

Impact Area: Memory Hierarchy
- Deep cache/storage hierarchy
- Need a conceptual model
- Need smart thread schedulers

Impact Area: Asymmetry/Heterogeneity
- Fat/Thin cores
- SIMD extensions
- Multiple coherence domains
- Mixed-mode parallelism
- Virtual Machines
- …

Impact Area: Concurrency Primitives
- Parallel prefix (a sketch follows below)
- Hash map [Herlihy08] (Maurice Herlihy, Nir Shavit, Moran Tzafrir, DISC’08)
- Map reduce [Karloff09] (Howard Karloff, Siddharth Suri, Sergei Vassilvitskii, this workshop)
- Transactional memory
- Memory block transactions [Blelloch08] (Guy E. Blelloch, Phillip B. Gibbons, S. Harsha Vardhan, SPAA’08)
- Graphics primitives [Ha08] (Phuong Ha, Philippas Tsigas, Otto Anshus, DISC’08)
Theory’s roles: make the case that Many-Core should (or should not) support a primitive, improve the algorithms, and recommend new primitives (prescriptive).
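To make the first item above concrete, here is a minimal sketch of the parallel prefix (scan) primitive as a two-pass blocked computation over std::thread. This is an illustration only, not the formulation from any of the cited papers; the function name parallel_prefix and the blocking scheme are choices made for this example.

```cpp
// Minimal sketch of the "parallel prefix" (scan) primitive.
// Pass 1: each thread sums its block.  A short sequential scan over the
// block totals gives each block's starting offset.  Pass 2: each thread
// computes prefix sums within its block, shifted by that offset.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Inclusive prefix sums of `in`, computed with `nthreads` worker threads.
std::vector<long long> parallel_prefix(const std::vector<long long>& in,
                                       unsigned nthreads) {
    const std::size_t n = in.size();
    std::vector<long long> out(n);
    if (n == 0) return out;
    if (nthreads == 0) nthreads = 1;
    if (nthreads > n) nthreads = static_cast<unsigned>(n);
    const std::size_t block = (n + nthreads - 1) / nthreads;

    // Pass 1: per-block totals.
    std::vector<long long> block_sum(nthreads, 0);
    {
        std::vector<std::thread> ts;
        for (unsigned t = 0; t < nthreads; ++t)
            ts.emplace_back([&, t] {
                std::size_t lo = t * block, hi = std::min(n, lo + block);
                for (std::size_t i = lo; i < hi; ++i) block_sum[t] += in[i];
            });
        for (auto& th : ts) th.join();
    }

    // Sequential scan over the (few) block totals -> starting offsets.
    std::vector<long long> offset(nthreads, 0);
    for (unsigned t = 1; t < nthreads; ++t)
        offset[t] = offset[t - 1] + block_sum[t - 1];

    // Pass 2: local prefix sums shifted by the block offset.
    {
        std::vector<std::thread> ts;
        for (unsigned t = 0; t < nthreads; ++t)
            ts.emplace_back([&, t] {
                std::size_t lo = t * block, hi = std::min(n, lo + block);
                long long run = offset[t];
                for (std::size_t i = lo; i < hi; ++i) { run += in[i]; out[i] = run; }
            });
        for (auto& th : ts) th.join();
    }
    return out;
}

int main() {
    std::vector<long long> v = {3, 1, 4, 1, 5, 9, 2, 6};
    for (long long x : parallel_prefix(v, 4)) std::cout << x << ' ';
    std::cout << '\n';   // prints: 3 4 8 9 14 23 25 31
}
```

With 4 threads on the 8-element demo input, pass 1 yields block sums (4, 5, 14, 8), the short sequential scan turns them into offsets (0, 4, 9, 23), and pass 2 fills in the final prefix sums.
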
Impact Area: Power
Many-cores provide features for reducing power:
- Voltage scaling [Albers07] (Susanne Albers, Fabian Muller, Swen Schmelzer, SPAA’07)
- Dynamically run on fewer cores, fewer banks
Fertile area for Theory help (the standard speed-scaling model is recalled below)
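For context, the voltage-scaling line is usually analyzed under the standard speed-scaling model from that literature; the formulation below is a common modeling assumption recalled here, not something stated on the slide, and the exponent alpha is a model parameter.

```latex
% Standard dynamic-power / speed-scaling model (a common assumption in the
% voltage-scaling literature; not stated on the slide).
\[
  P_{\text{dynamic}} \approx C \, V^2 f,
  \qquad
  P(s) = s^{\alpha} \ \ (\text{typically } 2 \le \alpha \le 3),
  \qquad
  E = \int_0^T P\big(s(t)\big)\, dt .
\]
```
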
Deep Dive: Memory Hierarchy
- Deep cache/storage hierarchy
- Need a conceptual model
- Need smart thread schedulers

Good Performance Requires Effective Use of the Memory Hierarchy
Performance: running/response time, throughput, power
[Figure: hierarchy of CPU, L1, L2 Cache, Main Memory, Magnetic Disks]
Two new trends, Pervasive Multicore & Pervasive Flash, bring new challenges and opportunities

New Trend 1: Pervasive Multicore
[Figure: multiple CPUs with private L1s above a shared L2 cache, main memory, and magnetic disks]
Challenges:
- Cores compete for the hierarchy
- Hard to reason about parallel performance
- Hundred cores coming soon
- Cache hierarchy design in flux
- Hierarchies differ across platforms
Opportunity:
- Rethink apps & systems to take advantage of more CPUs on chip
Makes effective use of the hierarchy much harder

New Trend 2: Pervasive Flash
[Figure: the hierarchy with Flash devices added alongside main memory and magnetic disks]
Challenges:
- Performance quirks of Flash
- Technology in flux, e.g., the Flash Translation Layer (FTL)
Opportunity:
- Rethink apps & systems to take advantage
A new type of storage in the hierarchy

How the Hierarchy is Treated Today
Algorithm designers & application/system developers often tend toward one of two extremes:
- Ignorant: API view of Memory + I/O; parallelism often ignored [performance iffy]
- (Pain)fully aware: hand-tuned to the platform [effort high, not portable, limited sharing scenarios]
Or they focus on one or a few aspects, but without a comprehensive view of the whole

Hierarchy-Savvy Parallel Algorithm Design (Hi-Spade) project
…seeks to enable a hierarchy-savvy approach to algorithm design & systems for emerging parallel hierarchies
“Hierarchy-Savvy”:
- Hide what can be hidden
- Expose what must be exposed for good performance
- Robust: many platforms, many resource-sharing scenarios
- Sweet spot between ignorant and (pain)fully aware
http://www.pittsburgh.intel-research.net/projects/hi-spade/

Hi-Spade Research Scope
A hierarchy-savvy approach to algorithm design & systems for emerging parallel hierarchies
Research agenda includes:
- Theory: conceptual models, algorithms, analytical guarantees
- Systems: runtime support, performance tools, architectural features
- Applications: databases, operating systems, application kernels

Cache Hierarchies: Sequential
External Memory (EM) Algorithms
[Figure: External Memory Model: main memory of size M, external memory, transfers in blocks of size B]
External Memory Model [see Vitter’s survey: Jeffrey S. Vitter, ACM Computing Surveys, 2001]
+ Simple model
+ Minimize I/Os
– Only 2 levels
(Standard I/O bounds in this model are recalled below.)
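For reference, the two canonical bounds in this model are the following; they are standard results from the EM literature surveyed by Vitter, recalled here rather than taken from the slide.

```latex
% Canonical External Memory bounds: scanning and sorting N items with
% main memory size M and block size B.
\[
  \text{scan}(N) = \Theta\!\left(\frac{N}{B}\right) \text{ I/Os},
  \qquad
  \text{sort}(N) = \Theta\!\left(\frac{N}{B}\,\log_{M/B}\frac{N}{B}\right) \text{ I/Os}.
\]
```
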
Cache Hierarchies: Sequential
Alternative: Cache-Oblivious Algorithms [Frigo99] (Matteo Frigo, Charles E. Leiserson, Harald Prokop, Sridhar Ramachandran, FOCS’99)
Cache-Oblivious Model: a twist on the EM model in which M & B are unknown to the algorithm
+ Simple model
+ Key goal: good performance for any M & B, which guarantees good cache performance at all levels of the hierarchy
– Single CPU only
(A classic example, recursive matrix transposition, is sketched below.)
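To illustrate the model, here is a minimal sketch of a classic cache-oblivious routine, recursive out-of-place matrix transposition in the spirit of Frigo et al. The code never mentions M or B, yet once a recursive subproblem is small enough it fits in any cache level. The function name, base-case threshold, and demo sizes are choices made for this sketch, not details from the cited paper.

```cpp
// Minimal sketch of a cache-oblivious algorithm: recursive out-of-place
// matrix transposition.  The recursion keeps splitting the larger
// dimension, so at some depth every subproblem fits in any cache of size M
// with block size B -- without the code ever knowing M or B.
#include <cstddef>
#include <iostream>
#include <vector>

// Transpose the r0..r1 x c0..c1 submatrix of A (n x m, row-major)
// into B (m x n, row-major).
void transpose_rec(const std::vector<double>& A, std::vector<double>& B,
                   std::size_t n, std::size_t m,
                   std::size_t r0, std::size_t r1,
                   std::size_t c0, std::size_t c1) {
    const std::size_t rows = r1 - r0, cols = c1 - c0;
    if (rows * cols <= 64) {                       // small base case
        for (std::size_t i = r0; i < r1; ++i)
            for (std::size_t j = c0; j < c1; ++j)
                B[j * n + i] = A[i * m + j];
        return;
    }
    if (rows >= cols) {                            // split the longer side
        std::size_t rm = r0 + rows / 2;
        transpose_rec(A, B, n, m, r0, rm, c0, c1);
        transpose_rec(A, B, n, m, rm, r1, c0, c1);
    } else {
        std::size_t cm = c0 + cols / 2;
        transpose_rec(A, B, n, m, r0, r1, c0, cm);
        transpose_rec(A, B, n, m, r0, r1, cm, c1);
    }
}

int main() {
    const std::size_t n = 3, m = 4;                // small demo matrix
    std::vector<double> A(n * m), B(m * n);
    for (std::size_t i = 0; i < n * m; ++i) A[i] = double(i);
    transpose_rec(A, B, n, m, 0, n, 0, m);
    for (std::size_t j = 0; j < m; ++j) {          // print the 4 x 3 transpose
        for (std::size_t i = 0; i < n; ++i) std::cout << B[j * n + i] << ' ';
        std::cout << '\n';
    }
}
```

The key design choice is splitting the longer dimension: submatrices stay roughly square, so a subproblem of side s touches about s*s/B + s blocks once s*s fits in M, for any M and B.
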
Cache Hierarchies: Parallel
- Explicit multi-level hierarchy: the Multi-BSP model [Valiant08] (Leslie G. Valiant, ESA’08; also this workshop)
- Hierarchy-savvy sweet spot: goal is to approach the simplicity of the cache-oblivious model

Challenge: the theory of cache-oblivious algorithms falls apart once you introduce parallelism:
Good performance for any M & B on 2 levels DOES NOT imply good performance at all levels of the hierarchy
Key reason: caches are not fully shared
[Figure: CPU1, CPU2, CPU3 with private L1s above a shared L2 cache]
What’s good for CPU1 is often bad for CPU2 & CPU3, e.g., all want to write block B at ≈ the same time
– Parallel cache-obliviousness is too strict a goal

Key new dimension: scheduling of parallel threads has a LARGE impact on cache performance
[Figure: CPU1, CPU2, CPU3 with private L1s above a shared L2 cache]
Recall the problem scenario: all CPUs want to write block B at ≈ the same time
Can mitigate (but not solve) the problem if the scheduler places those writes far apart in time
(A small sketch of this kind of write contention appears below.)
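The sketch below is not the scheduler itself; it only illustrates the underlying hardware cost the slide points at: when several cores repeatedly write counters that happen to live in the same cache block (false sharing), the block ping-pongs between their caches, whereas padding the counters onto separate blocks avoids the contention. The 64-byte block size and the timings are platform-dependent assumptions, and C++17 or later is assumed for the over-aligned vector elements.

```cpp
// Minimal sketch of the "everyone writes the same block" problem: four
// threads incrementing counters that share one cache block vs. counters
// padded onto separate 64-byte blocks.  Illustration only.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

constexpr int kThreads = 4;
constexpr std::int64_t kIters = 10'000'000;

struct Packed { std::atomic<std::int64_t> v{0}; };             // neighbors share a block
struct alignas(64) Padded { std::atomic<std::int64_t> v{0}; }; // one counter per block

template <typename Slot>
double run() {
    std::vector<Slot> slots(kThreads);
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> ts;
    for (int t = 0; t < kThreads; ++t)
        ts.emplace_back([&slots, t] {
            for (std::int64_t i = 0; i < kIters; ++i)
                slots[t].v.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : ts) th.join();
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - start).count();
}

int main() {
    std::cout << "counters in the same block:  " << run<Packed>() << " s\n";
    std::cout << "counters in separate blocks: " << run<Padded>() << " s\n";
}
```

On typical multicore machines the same-block variant is noticeably slower, which is exactly the "all want to write B at about the same time" effect; a scheduler that spreads such writes apart in time reduces the ping-ponging.
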
Existing Parallel Cache Models (slide from Rezaul Chowdhury)
- Parallel Shared-Cache Model: p CPUs share a single cache of size C, with block transfers of size B to main memory
- Parallel Private-Cache Model: each of the p CPUs has its own private cache of size C, with block transfers of size B to main memory

Competing Demands of Private and Shared Caches (slide from Rezaul Chowdhury)
[Figure: p CPUs with private caches, a shared cache, and main memory]
- Shared cache: prefers that cores work on the same set of cache blocks
- Private caches: prefer that cores work on disjoint sets of cache blocks
Experimental results on CMP architectures have shown that:
- work stealing, the state-of-the-art scheduler for the private-cache model, can suffer from excessive shared-cache misses
- parallel depth-first, the best scheduler for the shared-cache model, can incur excessive private-cache misses

Private vs. Shared Caches
[Blelloch08]: Guy E. Blelloch, Rezaul A. Chowdhury, Phillip B. Gibbons, Vijaya Ramachandran, Shimin Chen, Michael Kozuch, SODA’08
- Parallel all-shared hierarchy: + provably good cache performance for cache-oblivious algorithms
- 3-level multi-core model (private L1s, shared L2): insights on private vs. shared
  + Designed a new scheduler with provably good cache performance for a class of divide-and-conquer algorithms [Blelloch08]
  – Results require exposing the working-set size of each recursive subproblem

Parallel Tree of Caches
Approach [Blelloch09] (Guy E. Blelloch, Phillip B. Gibbons, Harsha Vardhan Simhadri, CMU tech report 2009):
- Design a low-depth cache-oblivious algorithm: low depth D, good miss bound
- Thm: for each level i, the parallel schedule incurs only O(M_i P D / B_i) more misses than the sequential schedule
(One way to write this bound out is given below.)
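Spelled out, with Q_i denoting the number of misses at a level-i cache (a shorthand introduced here, not on the slide), the stated bound reads:

```latex
% The slide's miss bound, written out.  Q_i denotes the number of misses at a
% level-i cache (of size M_i, block size B_i); P is the number of processors
% and D the depth of the computation.
\[
  Q_i^{\text{parallel}}
  \;\le\;
  Q_i^{\text{sequential}}
  \;+\;
  O\!\left(\frac{M_i \, P \, D}{B_i}\right)
  \qquad \text{for every level } i .
\]
```
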
Five Areas in Which Theory Can (Should) Have an Important Impact
- Parallel Thinking
- Memory Hierarchy
- Asymmetry/Heterogeneity
- Concurrency Primitives
- Power