1
Thread Criticality Predictors for Dynamic Performance, Power, and Resource Management in Chip Multiprocessors
Abhishek Bhattacharjee, Margaret Martonosi
Princeton University
2
Why Thread Criticality Prediction?
[Figure: instructions executed by threads T0-T3, annotated with D-cache misses, I-cache misses, and stall time; threads 1 & 3 finish last and are critical.]
Performance degradation, energy inefficiency
Sources of variability: algorithm, process variations, thermal emergencies, etc.
With thread criticality prediction:
1. Task stealing for performance
2. DVFS for energy efficiency
3. Many others …
3
Related Work
Instruction criticality [Fields et al., Tune et al. 2001, etc.]
Thrifty barrier [Li et al. 2005]: faster cores are transitioned into a low-power mode based on a prediction of barrier stall time
DVFS for energy efficiency at barriers [Liu et al. 2005]
Meeting points [Cai et al. 2008]: DVFS non-critical threads by tracking the loop-iteration completion rate across cores (parallel loops)
Our approach:
1. Also handles non-barrier code
2. Works with constant or variable loop iteration sizes
3. Predicts criticality at any point in time, not just at barriers
4
Thread Criticality Prediction Goals
Design Goals
1. Accuracy: absolute and relative TCP accuracy
2. Low-overhead implementation: simple HW (allow SW policies to be built on top)
3. One predictor, many uses
Design Decisions
1. Find a suitable architectural metric
2. History-based local approach versus thread-comparative approach
3. This paper: TBB, DVFS. Other uses: shared LLC management, SMT and memory priority, …
5
Outline of this Talk
Thread Criticality Predictor Design
- Methodology
- Identify µarchitectural events impacting thread criticality
- Introduce basic TCP hardware
Thread Criticality Predictor Uses
- Apply to Intel's Threading Building Blocks (TBB)
- Apply for energy efficiency in barrier-based programs
6
Methodology
Evaluations on a range of architectures: high-performance and embedded domains
Full-system including OS
Detailed power/energy studies using FPGA emulator

| Infrastructure | Domain | System | Cores | Caches |
|---|---|---|---|---|
| GEMS Simulator | High-performance, wide-issue, out-of-order | 16-core CMP with Solaris 10 | 4-issue SPARC | 32KB L1, 4MB L2 |
| ARM Simulator | Embedded, in-order | 4-32 core CMP | 2-issue ARM | 32KB L1, 4MB L2 |
| FPGA Emulator | Embedded, in-order | 4-core CMP with Linux 2.6 | 1-issue SPARC | 4KB I-Cache, 8KB D-Cache |
7
Why not History-Based TCPs?
+ Info local to core: no communication
-- Requires repetitive barrier behavior
-- Problem for in-order pipelines: variant IPCs
8
Thread-Comparative Metrics for TCP: Instruction Counts
9
Thread-Comparative Metrics for TCP: L1 D Cache Misses
10
Thread-Comparative Metrics for TCP: L1 I & D Cache Misses
11
Thread-Comparative Metrics for TCP: All L1 and L2 Cache Misses
13
Outline of this Talk
Thread Criticality Predictor Design
- Methodology
- Identify µarchitectural events impacting thread criticality
- Introduce basic TCP hardware
Thread Criticality Predictor Uses
- Apply to Intel's Threading Building Blocks (TBB)
- Apply for energy efficiency in barrier-based programs
14
Basic TCP Hardware
[Animated diagram: four cores, each with L1 I & D caches, share an L2 cache; the TCP hardware sits at the L2 controller and holds one criticality counter per core. In the walkthrough, an L1 I$ or D$ miss on a core increments that core's counter by 1, while an L2 miss increments it by a larger weight (10 in the example), reflecting its larger penalty.]
Per-core criticality counters track poorly cached, slow threads
Periodically refresh criticality counters with an Interval Bound Register
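A minimal sketch of the counter bookkeeping described above. The miss weights are assumptions read off the walkthrough (L1 miss adds 1, L2 miss adds 10); real hardware would size them by the relative miss penalties:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

constexpr int kNumCores = 4;
constexpr std::uint16_t kL1MissWeight = 1;   // assumed weight per L1 miss
constexpr std::uint16_t kL2MissWeight = 10;  // assumed weight per L2 miss

// Per-core criticality counters kept centrally at the L2 controller.
struct CriticalityCounters {
    std::array<std::uint16_t, kNumCores> count{};  // 14-bit in hardware

    void onL1Miss(int core) { count[core] += kL1MissWeight; }
    void onL2Miss(int core) { count[core] += kL2MissWeight; }

    // The most critical core is the one with the largest counter.
    int mostCritical() const {
        return std::max_element(count.begin(), count.end()) - count.begin();
    }

    // Periodic refresh, triggered when the Interval Bound Register expires.
    void refresh() { count.fill(0); }
};
```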
15
Outline of this Talk
Thread Criticality Predictor (TCP) Design
- Methodology
- Identify µarchitectural events impacting thread criticality
- Introduce basic TCP hardware
Thread Criticality Predictor Uses
- Apply to Intel's Threading Building Blocks (TBB)
- Apply for energy efficiency in barrier-based programs
16
TBB Task Stealing & Thread Criticality
TBB dynamic scheduler distributes tasks
Each thread maintains a software queue filled with tasks
Empty queue: the thread "steals" a task from another thread's queue
Approach 1: Default TBB uses random task stealing
- More failed steals at higher core counts → poor performance
Approach 2: Occupancy-based task stealing [Contreras, Martonosi 2008]
- Steal based on the number of items in each SW queue
- Must track and compare maximum occupancy counts
17
TCP-Guided TBB Task Stealing
[Animated diagram: four cores with software task queues SW Q0-Q3 share the L2; the TCP control logic holds the criticality counters and Interval Bound Register. Core 3 suffers L1 and L2 misses, so its counter grows largest; when Core 2's queue empties and it issues a steal request, the TCP scans for the maximum counter value and directs the steal to Core 3's queue.]
TCP initiates steals from the critical thread
Modest message overhead: one L2 access latency
Scalable: 14-bit criticality counters → 114 bytes of storage @ 64 cores
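A sketch of the steal path just described. The queue type and the tcpMostCriticalCore() hook are illustrative stand-ins under our assumptions, not TBB's actual internals:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <deque>
#include <optional>

constexpr int kNumCores = 4;

struct Task { int id; };

// Per-core software task queues (stand-ins for TBB's internal structures).
std::array<std::deque<Task>, kNumCores> queues;

// Assumed TCP hook (one L2-latency message in hardware): the core whose
// criticality counter is currently largest. Fed by hardware; stubbed here.
std::array<std::uint16_t, kNumCores> criticality{};
int tcpMostCriticalCore() {
    return std::max_element(criticality.begin(), criticality.end())
           - criticality.begin();
}

std::optional<Task> getTask(int self) {
    if (!queues[self].empty()) {          // run local work first
        Task t = queues[self].front();
        queues[self].pop_front();
        return t;
    }
    int victim = tcpMostCriticalCore();   // TCP picks the steal victim
    if (victim != self && !queues[victim].empty()) {
        Task t = queues[victim].back();   // steal from the slow, critical thread
        queues[victim].pop_back();
        return t;
    }
    return std::nullopt;                  // failed steal: back off and retry
}
```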
18
TCP-Guided TBB Performance
TCP accesses are penalized with L2 access latency in these results
[Figure: % performance improvement versus random task stealing, across benchmarks and core counts.]
Avg. improvement over Random (32 cores) = 21.6%
Avg. improvement over Occupancy (32 cores) = 13.8%
19
Outline of this Talk
Thread Criticality Predictor Design
- Methodology
- Identify µarchitectural events impacting thread criticality
- Introduce basic TCP hardware
Thread Criticality Predictor Uses
- Apply to Intel's Threading Building Blocks (TBB)
- Apply for energy efficiency in barrier-based programs
20
Adapting TCP for Energy Efficiency in Barrier-Based Programs
[Figure: threads T0-T3 executing toward a barrier; T1 suffers an L2 D$ miss and finishes last. T1 is critical ⇒ DVFS T0, T2, T3.]
Approach: DVFS non-critical threads to eliminate barrier stall time
Challenges:
- Relative criticalities
- Misprediction costs
- DVFS overheads
21
TCP for DVFS: Results
FPGA platform with 4 cores, 50% fixed leakage cost
See paper for details: TCP mispredictions, DVFS overheads, etc.
Average 15% energy savings
22
Conclusions
Goal 1: Accuracy
- Accurate TCPs based on simple cache statistics
Goal 2: Low-overhead hardware
- Scalable per-core criticality counters
- TCP placed in a central location where cache information is already available
Goal 3: Versatility
- TBB improved by 13.8% over the best known approach @ 32 cores
- DVFS used to achieve 15% energy savings
Two uses shown, many others possible…
23
Thread Criticality Predictors for Dynamic Performance, Power, and Resource Management in Chip Multiprocessors
Abhishek Bhattacharjee, Margaret Martonosi
Princeton University
24
Backup
25
Benchmarks

| Benchmark | Suite | Problem Size | TCP App. Studied |
|---|---|---|---|
| LU | SPLASH-2 | 1024x1024 matrix, 64x64 blocks | DVFS |
| Barnes | SPLASH-2 | 65,536 particles | DVFS |
| Volrend | SPLASH-2 | Head | DVFS |
| Ocean | SPLASH-2 | 514x514 grid | DVFS |
| FFT | SPLASH-2 | 4,194,304 data points | DVFS |
| Cholesky | SPLASH-2 | tk29.O | DVFS |
| Radix | SPLASH-2 | 8,388,608 integers | DVFS |
| Water-Nsq | SPLASH-2 | 4096 molecules | DVFS |
| Water-Sp | SPLASH-2 | 4096 molecules | DVFS |
| Blackscholes | PARSEC | 16,385 (simmedium) | DVFS, TBB |
| Streamcluster | PARSEC | 8192 points per block (simmedium) | DVFS, TBB |
| Swaptions | PARSEC | 32 swaptions, 5000 sims. (simmedium) | TBB |
| Fluidanimate | PARSEC | 5 frames, 100K particles (simmedium) | TBB |

Use larger, realistic data sets for SPLASH-2 [Bienia et al. '08]
26
How is %Error of a Metric Calculated?
For one barrier iteration or a 10% execution snapshot, track the following per thread i:
1. Num instructions = I_i
2. Num cache misses per instruction = CM_i
3. Compute time = CT_i
Suppose thread 1 is critical. For each other thread i, compute the metric ratios I_i / I_1 and CM_i / CM_1 and compare them with the compute-time ratio CT_i / CT_1.
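A sketch of that comparison; averaging the per-thread deviations into a single %error figure is our assumption about how the numbers are aggregated:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// For each non-critical thread i, a metric M predicts the compute-time
// ratio CT_i / CT_crit by M_i / M_crit. %Error measures how far the
// prediction deviates from the actual ratio, averaged over threads.
double percentError(const std::vector<double>& metric,       // I_i or CM_i
                    const std::vector<double>& computeTime,  // CT_i
                    std::size_t critical) {                  // critical thread
    double sum = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < metric.size(); ++i) {
        if (i == critical) continue;
        double predicted = metric[i] / metric[critical];     // e.g. CM_i / CM_1
        double actual = computeTime[i] / computeTime[critical];
        sum += std::fabs(predicted - actual) / actual * 100.0;
        ++n;
    }
    return n ? sum / n : 0.0;
}
```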
27
Thread-Comparative Metrics for TCP: Control Flow Changes & TLB Misses
Control flow changes are captured by I-cache misses
TLB misses are similar across cores → poor indicator of criticality
28
TBB Random Stealer
TBB dynamic scheduler distributes tasks
[Animated diagram: a thread's queue runs empty and it attempts a steal. Random stealing first picks SW Q1, which is also empty: a false negative, forcing a backoff. The retry picks SW Q0 and succeeds, landing by luck on the critical thread (the ideal steal victim).]
Poor performance at higher core counts and load imbalance
29
TBB Stealing with Occupancy-Based Approach
Occupancy-based approach [Contreras, Martonosi 2008]
[Animated diagram: per-queue occupancy counts are tracked (e.g. Occ: 3 for SW Q0, Occ: 1, Occ: 0). The thread with an empty queue steals from the fullest queue, SW Q0, and succeeds on the first try.]
False negatives eliminated, but still not stealing from the critical thread
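For contrast with the TCP-guided version shown earlier, a sketch of occupancy-based victim selection; names and the queue type are illustrative assumptions:

```cpp
#include <array>
#include <cstddef>
#include <deque>

constexpr int kNumCores = 4;
struct Task { int id; };
std::array<std::deque<Task>, kNumCores> queues;  // per-thread SW task queues

// Occupancy-based victim selection: steal from the fullest queue. This
// eliminates false negatives (empty victims) but, as the slide notes, the
// fullest queue need not belong to the slow, critical thread.
int occupancyVictim(int self) {
    int victim = -1;                 // -1: every other queue is empty
    std::size_t best = 0;
    for (int c = 0; c < kNumCores; ++c)
        if (c != self && queues[c].size() > best) {
            best = queues[c].size();
            victim = c;
        }
    return victim;
}
```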
30
Applying TCP to DVFS
Assume available frequencies are f0, 0.85f0, 0.70f0, 0.55f0
Hardware state: per-core criticality counters, current DVFS tags (initially all f0), a Switching Suggestion Table (SST), and a Switching Confidence Table (SCT)
SST: one row per current frequency, one column per target frequency; the row for current f0 holds the expected counter values 0.85T, 0.70T, 0.55T for targets 0.85f0, 0.70f0, 0.55f0
SCT: one confidence counter per core and target frequency
31
Applying TCP to DVFS
[Step 1: cache misses accumulate during the interval; CPU 0's criticality counter has grown to 10 while the others remain 0. All cores run at f0.]
32
Applying TCP to DVFS
[Step 2: at the end of the interval, CPU 0's counter has reached T and CPU 1's 0.83T. First check: is a core with counter value T running at f0?]
33
Applying TCP to DVFS
[Step 3: for each core, find the closest SST match to its criticality counter. CPU 1's 0.83T is closest to the 0.85T entry, so the SST suggests DVFS to 0.85f0 for core 1.]
34
Applying TCP to DVFS
[Step 4: the SST suggestion goes to the SCT: increment the suggested SCT counter and decrement the others.]
35
Applying TCP to DVFS
[Step 5: scan each core's SCT row for the maximum counter and check whether it corresponds to a DVFS switch.]
36
Applying TCP to DVFS
[Step 6: initiate DVFS on core 1 (f0 → 0.85f0) and refresh the criticality counters.]
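Putting the six steps together, a sketch of the per-interval decision loop. The SST ratio formula for rows other than f0, the requirement that a suggestion saturate its confidence counter before switching, and all identifiers are assumptions extrapolated from these slides:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

constexpr int kNumCores = 4;
constexpr int kNumFreqs = 4;
// Available frequency multipliers: f0, 0.85f0, 0.70f0, 0.55f0.
constexpr double kFreq[kNumFreqs] = {1.0, 0.85, 0.70, 0.55};

double T = 1024;                                 // interval bound (paper's value)
std::array<double, kNumCores> criticality{};     // per-core counters
std::array<int, kNumCores> currFreq{};           // index into kFreq, 0 = f0
// Switching Confidence Table: 2-bit saturating counter per (core, target).
std::array<std::array<std::uint8_t, kNumFreqs>, kNumCores> sct{};

// SST lookup for one core: the expected counter for target t is assumed to
// be (kFreq[t] / kFreq[current]) * T, which reproduces the slide's f0 row
// (T, 0.85T, 0.70T, 0.55T); the suggestion is the closest match.
int sstSuggest(int core) {
    int best = 0;
    double bestDist = 1e300;
    for (int t = 0; t < kNumFreqs; ++t) {
        double expected = kFreq[t] / kFreq[currFreq[core]] * T;
        double d = std::fabs(criticality[core] - expected);
        if (d < bestDist) { bestDist = d; best = t; }
    }
    return best;
}

// Guard from Step 2: only act if some core with counter value T
// (the critical thread) is running at full speed f0.
bool criticalCoreAtF0() {
    for (int c = 0; c < kNumCores; ++c)
        if (criticality[c] >= T && currFreq[c] == 0) return true;
    return false;
}

void endOfInterval() {
    if (criticalCoreAtF0()) {
        for (int c = 0; c < kNumCores; ++c) {
            int sugg = sstSuggest(c);                // Step 3
            for (int t = 0; t < kNumFreqs; ++t)      // Step 4: confidence
                sct[c][t] = (t == sugg) ? std::min<int>(sct[c][t] + 1, 3)
                                        : std::max<int>(sct[c][t] - 1, 0);
            int maxT = 0;                            // Step 5: scan for max
            for (int t = 1; t < kNumFreqs; ++t)
                if (sct[c][t] > sct[c][maxT]) maxT = t;
            // Step 6: switch only on a saturated suggestion (hysteresis).
            if (maxT != currFreq[c] && sct[c][maxT] == 3)
                currFreq[c] = maxT;                  // initiate DVFS
        }
    }
    criticality.fill(0);                             // refresh counters
}
```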
37
TCP Parameters for DVFS
Carried out a number of experiments to gauge T and the bits per SCT counter
T = 1024 → 78.19% accuracy
2-bit SCT counters → 92.68% accuracy
Refer to paper for details…
38
Handling DVFS Transition Overheads
39
TCP vs Meeting Points
Meeting points unsuitable for extremely irregular parallel program behavior
Example: LU from SPLASH-2
Pseudocode for a thread:

```
for all k from 0 to N-1
    if I own A_kk, factorize it
    BARRIER
    for all my blocks A_kj in pivot row
        A_kj = A_kj * A_kk^-1
    BARRIER
    for all my blocks A_ij in active interior of matrix
        A_ij = A_ij - A_ik * A_kj
end for
```
40
TCP vs Meeting Points ctd…
41
TCP Stability in Out-of-Order and In-Order Architectures
In-order architectures see highly variable IPCs
Thread criticality measured over instruction windows may differ from the overall criticality across the barrier run
Experiment: observe how IPCs change over 5000-cycle windows and compare window-level thread criticalities against the barrier criticality
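A sketch of that experiment done offline on a retirement trace: slice the run into 5000-cycle windows, take the thread with the fewest retired instructions (lowest IPC) as each window's critical thread, and measure how often it matches the whole-barrier critical thread. The trace format and the agreement measure are assumptions:

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kWindow = 5000;  // cycles per window

// insts[t][c] = cumulative instructions retired by thread t after cycle c
// (an assumed offline trace format).
using Trace = std::vector<std::vector<long>>;

// Index of the slowest (critical) thread over cycles [lo, hi); with equal
// window lengths, fewest retired instructions means lowest IPC.
std::size_t criticalThread(const Trace& insts, std::size_t lo, std::size_t hi) {
    std::size_t worst = 0;
    long worstRetired = insts[0][hi - 1] - (lo ? insts[0][lo - 1] : 0);
    for (std::size_t t = 1; t < insts.size(); ++t) {
        long retired = insts[t][hi - 1] - (lo ? insts[t][lo - 1] : 0);
        if (retired < worstRetired) { worstRetired = retired; worst = t; }
    }
    return worst;
}

// Fraction of 5000-cycle windows whose critical thread matches the
// critical thread of the whole barrier run.
double windowAgreement(const Trace& insts) {
    std::size_t cycles = insts[0].size();
    std::size_t overall = criticalThread(insts, 0, cycles);
    std::size_t match = 0, windows = 0;
    for (std::size_t lo = 0; lo + kWindow <= cycles; lo += kWindow, ++windows)
        if (criticalThread(insts, lo, lo + kWindow) == overall) ++match;
    return windows ? double(match) / windows : 1.0;
}
```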
42
Comparison to Other Works
Thread Motion:
- Could use TCP to trigger TM instead of using DVFS for energy efficiency in barrier-based programs
- TM has already been shown to successfully use a last-level-cache-miss-driven approach
Temperature-Constrained Power Control:
- TCP can be used as a performance proxy instead of MIPS to guide the controller's power allocation
- Could be used to guide DVFS of programs under temperature constraints