Near-Optimal Cache Block Placement with Reactive Nonuniform Cache Architectures
Nikos Hardavellas, Northwestern University
Team: M. Ferdman, B. Falsafi, A. Ailamaki (Northwestern, Carnegie Mellon, EPFL)
Moore’s Law Is Alive And Well
[Figure: a 90nm transistor (Intel, 2005) next to the Swine Flu A/H1N1 virus (CDC) for scale; process roadmap: 65nm (2007), 45nm (2010), 32nm (2013), 22nm (2016), 16nm (2019).]
Device scaling continues for at least another 10 years.
Moore’s Law Is Alive And Well
[Figure: processor scaling trends; the good days ended Nov. 2002 [Yelick09].]
"New" Moore's Law: 2x cores with every generation.
On-chip cache grows commensurately to supply all cores with data.
Larger Caches Are Slower Caches
[Figure: large caches mean slow access.]
Increasing access latency forces caches to be distributed.
Cache Design Trends
As caches become bigger, they get slower. The fix: split the cache into smaller "slices", balancing cache-slice access time against network latency.
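To make that trade-off concrete, here is a toy model in C of average access latency as a fixed cache budget is split into more slices; every constant in it is an illustrative assumption, not a number from the talk.

```c
#include <math.h>
#include <stdio.h>

/* Toy model (all constants are illustrative assumptions): splitting a
 * fixed-size cache into more slices shrinks per-slice access time but
 * adds network hops to reach the average slice. */
static double avg_latency_cycles(double cache_mb, int n_slices) {
    double slice_mb  = cache_mb / n_slices;
    double slice_lat = 3.0 + 4.0 * sqrt(slice_mb);    /* assumed tag+data time  */
    double avg_hops  = sqrt((double)n_slices) / 2.0;  /* assumed mesh distance  */
    double hop_lat   = 3.0;                           /* router + link, per hop */
    return slice_lat + avg_hops * hop_lat;
}

int main(void) {
    for (int n = 1; n <= 64; n *= 4)  /* a 16MB cache in 1, 4, 16, 64 slices */
        printf("%2d slices: %5.1f cycles\n", n, avg_latency_cycles(16.0, n));
    return 0;
}
```

Per-slice access time keeps falling as slices shrink, while network distance keeps growing, so some intermediate slice count minimizes the average.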
Modern Caches: Distributed
[Figure: tiled multicore, one core and one L2 slice per tile.]
Split the cache into "slices" and distribute them across the die.
Data Placement Determines Performance
[Figure: tiled multicore; a request served by a nearby cache slice is fast, while one served by a distant slice crosses many network hops.]
Goal: place data on chip close to where they are used.
Our Proposal: R-NUCA (Reactive Nonuniform Cache Architecture)
Data may exhibit arbitrarily complex behaviors... but few that matter!
Learn the behaviors at run time and exploit their characteristics.
Make the common case fast, the rare case correct.
Resolve conflicting requirements.
Reactive Nonuniform Cache Architecture
[Hardavellas et al., ISCA 2009] [Hardavellas et al., IEEE Micro Top Picks 2010]
Cache accesses can be classified at run time, and each class is amenable to a different placement.
Per-class block placement is simple, scalable, and transparent, with no need for HW coherence mechanisms at the LLC.
Up to 32% speedup (17% on average); within 5% on average of an ideal cache organization.
Rotational interleaving: data replication and fast single-probe lookup.
Outline
- Introduction
- Why do Cache Accesses Matter?
- Access Classification and Block Placement
- Reactive NUCA Mechanisms
- Evaluation
- Conclusion
Cache accesses dominate execution
[Figure: execution-time breakdown on a 4-core CMP running DSS (TPC-H on DB2, 1GB database) vs. an ideal design; lower is better. From Hardavellas et al., CIDR 2007.]
Bottleneck shifts from memory to L2-hit stalls.
We lose half the potential throughput
How much do we lose?
[Figure: achieved vs. potential throughput on a 4-core CMP running DSS (TPC-H on DB2, 1GB database); higher is better.]
We lose half the potential throughput.
Outline
- Introduction
- Why do Cache Accesses Matter?
- Access Classification and Block Placement
- Reactive NUCA Mechanisms
- Evaluation
- Conclusion
Terminology: Data Types
[Figure: data classes by access pattern. Private: read or written by a single core. Shared read-only: read by many cores. Shared read-write: read and written by many cores.]
Distributed Shared L2
Blocks are address-interleaved across slices (address mod <#slices>): a unique location for any block, private or shared.
Maximum capacity, but slow access (30+ cycles).
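A minimal sketch of that mapping in C (the 64-byte block size and the names are my assumptions):

```c
#include <stdint.h>

#define BLOCK_BITS 6  /* assumed 64-byte cache blocks */

/* Shared (address-interleaved) L2: the home slice of any block is a pure
 * function of its address, so every block, private or shared, has exactly
 * one possible location on the chip. */
static inline unsigned home_slice(uint64_t paddr, unsigned n_slices) {
    return (unsigned)((paddr >> BLOCK_BITS) % n_slices);
}
```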
Distributed Private L2
On every access, allocate the data at the local L2 slice.
Fast access to core-private data.
Distributed private L2: shared-RO access
On every access, allocate the data at the local L2 slice; shared read-only data thus get replicated across L2 slices.
Wastes capacity due to replication.
Distributed private L2: shared-RW access
On every access, allocate the data at the local L2 slice; shared read-write data must be kept coherent via indirection through a directory (dir).
Slow for shared read-write data; wastes capacity (directory overhead) and bandwidth.
Conventional Multi-Core Caches
Shared: address-interleaved blocks; high capacity, but slow access.
Private: each block cached locally; fast (local) access, but low capacity (replicas) and coherence via indirection (distributed directory).
We want: high capacity (shared) + fast access (private).
Where to Place the Data?
Close to where they are used!
- Accessed by a single core: migrate locally.
- Accessed by many cores: replicate (?). If read-only, replication is OK; if read-write, coherence becomes a problem.
- Low reuse: evenly distribute across sharers.
[Figure: placement as a function of sharer count and read-write behavior: migrate, replicate, or share (address-interleave).]
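The same rule written out as C code, a sketch only (the reuse threshold and the names are assumptions, not the talk's exact policy):

```c
typedef enum {
    PLACE_MIGRATE_LOCAL,  /* keep in the accessing core's slice */
    PLACE_REPLICATE,      /* copy near each reader              */
    PLACE_INTERLEAVE      /* spread across the sharers' slices  */
} placement_t;

/* Decision rule from the slide: single-core data migrates locally,
 * widely read read-only data with high reuse is replicated, and
 * read-write or low-reuse data is address-interleaved across sharers
 * so no coherence indirection is needed. */
static placement_t place(unsigned sharers, int is_read_write, int high_reuse) {
    if (sharers <= 1)
        return PLACE_MIGRATE_LOCAL;
    if (!is_read_write && high_reuse)
        return PLACE_REPLICATE;   /* read-only: replication is safe */
    return PLACE_INTERLEAVE;      /* read-write or low reuse        */
}
```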
Methodology
Flexus: full-system, cycle-accurate timing simulation [Hardavellas et al., SIGMETRICS-PER 2004; Wenisch et al., IEEE Micro 2006].
Workloads:
- OLTP: TPC-C on IBM DB2 v8 and Oracle 10g
- DSS: TPC-H queries 6, 8, 13
- SPECweb99 on Apache 2.0
- Multiprogrammed: SPEC2K
- Scientific: em3d
Model parameters:
- Tiled CMP, LLC = L2
- Server/scientific workloads: 16 cores, 1MB/core
- Multiprogrammed workloads: 8 cores, 3MB/core
- OoO cores, 2GHz, 96-entry ROB
- Folded 2D torus; 2-cycle router, 1-cycle link
- 45ns memory
Cache Access Classification Example
[Figure: bubble chart. Each bubble: cache blocks shared by x cores; bubble size proportional to % of L2 accesses; y-axis: % of blocks in the bubble that are read-write.]
Cache Access Clustering
[Figure: bubble charts (% read-write blocks vs. number of sharers) for server and scientific/MP apps. Read-write blocks with many sharers cluster toward sharing (address-interleave); read-only blocks with many sharers toward replication; blocks accessed by a single core toward local migration.]
Accesses naturally form 3 clusters.
Instruction Replication
Instruction working set too large for one cache slice.
[Figure: tiled multicore partitioned into clusters of neighboring slices.]
Distribute instructions within a cluster of neighbors; replicate across clusters.
Reactive NUCA in a nutshell
Classify accesses:
- Private data: like the private scheme (migrate).
- Shared data: like the shared scheme (interleave).
- Instructions: controlled replication (middle ground).
To place cache blocks, we first need to classify them.
Outline
- Introduction
- Access Classification and Block Placement
- Reactive NUCA Mechanisms
- Evaluation
- Conclusion
Classification Granularity
Per-block classification: high area/power overhead (would cut L2 size in half); high latency (indirection through a directory).
Per-page classification (utilizing the OS page table): a persistent structure the core consults on every access anyway (TLB); reuses already existing SW/HW structures and events; accurate (<0.5% error).
Classify entire data pages, with the page table/TLB for bookkeeping.
Classification Mechanisms
Instruction classification: all accesses from the L1-I (per block).
Data classification: private/shared, per page, at TLB-miss time.
[Figure: on the first access, core i's TLB miss leads the OS to mark page A "private to i"; on a later access by another core j, the OS re-marks A "shared".]
Bookkeeping through the OS page table and TLB.
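A minimal sketch, in C, of the OS-side rule applied at TLB-miss time (the names and structure are hypothetical):

```c
enum page_class { CLASS_INVALID, CLASS_PRIVATE, CLASS_SHARED };

struct pte_info {
    enum page_class cls;   /* classification stored in the page table */
    unsigned        owner; /* first-toucher core for a private page   */
};

/* First touch marks the page private to the requesting core; a later
 * TLB miss from any other core promotes it to shared. */
static void classify_on_tlb_miss(struct pte_info *pte, unsigned core) {
    if (pte->cls == CLASS_INVALID) {
        pte->cls   = CLASS_PRIVATE;   /* "A: private to i" */
        pte->owner = core;
    } else if (pte->cls == CLASS_PRIVATE && pte->owner != core) {
        pte->cls = CLASS_SHARED;      /* "A: shared"       */
    }
}
```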
Page Table and TLB Extensions
The core accesses the page table for every access anyway (TLB), so we pass information from the "directory" to the core, utilizing already existing SW/HW structures and events.
TLB entry: P/S (1 bit), vpage, ppage.
Page table entry: P/S/I (2 bits), L2 id (log2(n) bits), vpage, ppage.
Page granularity allows simple + practical HW.
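Sketched as C bitfields; the field widths are illustrative assumptions, and only the class bits and the L2 id come from the slide:

```c
#include <stdint.h>

/* TLB entry: one extra bit distinguishes private (P) from shared (S). */
struct tlb_entry {
    uint64_t vpage  : 36;  /* illustrative width */
    uint64_t ppage  : 36;  /* illustrative width */
    uint64_t shared : 1;   /* P/S                */
};

/* Page table entry: 2 bits encode P/S/I, plus log2(n) bits naming the
 * owning L2 slice of a private page; 6 bits covers up to 64 slices. */
struct os_pte {
    uint64_t vpage : 36;
    uint64_t ppage : 36;
    uint64_t cls   : 2;    /* P / S / I    */
    uint64_t l2_id : 6;    /* log2(n) bits */
};
```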
Data Class Bookkeeping and Lookup
Private data: place in the local L2 slice.
- Page table entry: P, L2 id, vpage, ppage. TLB entry: P, vpage, ppage.
Shared data: place in the aggregate L2 (address-interleaved).
- Page table entry: S, L2 id, vpage, ppage. TLB entry: S, vpage, ppage.
- Physical address: tag, L2 id, cache index, offset.
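Both cases collapse into a single-probe lookup; a sketch in C (block size and names are my assumptions):

```c
#include <stdint.h>

#define BLOCK_BITS 6  /* assumed 64-byte cache blocks */

/* Single-probe data lookup: the class bit delivered by the TLB picks the
 * slice directly -- no directory indirection, no multi-slice search. */
static unsigned data_slice(int page_is_shared, unsigned local_slice,
                           uint64_t paddr, unsigned n_slices) {
    if (!page_is_shared)
        return local_slice;                              /* private: local slice */
    return (unsigned)((paddr >> BLOCK_BITS) % n_slices); /* shared: interleave   */
}
```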
Coherence: No Need for HW Mechanisms at LLC
Reactive NUCA placement guarantee: each read-write datum lives in a unique and known location (shared data: address-interleaved slice; private data: local slice).
[Figure: tiled multicore; every request goes directly to the one slice that can hold the block.]
Fast access, eliminates HW overhead, SIMPLE.
Instructions Lookup: Rotational Interleaving
Size-4 clusters: the local slice + 3 neighbors.
[Figure: grid of tiles labeled with rotational IDs (RIDs); the RID grows by 1 per step in one dimension and by log2(k) per step in the other, so every cluster contains each RID exactly once.]
log2(k) bits of the instruction block's address (e.g., PC: 0xfa480) select a RID, and each slice caches the same blocks on behalf of any cluster that overlaps it.
Fast access (nearest-neighbor, simple lookup); balances access latency with capacity constraints; equal capacity pressure at overlapped slices.
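A lookup sketch in C under stated assumptions: tiles form a mesh, the cluster size is k = 4, and a tile at (x, y) carries rotational ID (x + 2y) mod 4, which puts each RID exactly once in every 2x2 window. The coordinate scheme and names are mine, not the talk's, and torus wrap-around is omitted.

```c
#include <stdint.h>

#define K 4  /* cluster size: the local slice + 3 neighbors */

/* Rotational ID: +1 per step in x, +log2(K) = +2 per step in y, so any
 * 2x2 window of tiles contains each RID 0..3 exactly once. */
static unsigned rid(unsigned x, unsigned y) {
    return (x + 2u * y) % K;
}

/* log2(K) bits of the instruction block's address name a target RID; the
 * requester probes the one tile in its 2x2 window holding that RID.  A
 * tile serves the same set of blocks (those whose address bits match its
 * RID) for every cluster that overlaps it, so the lookup is one probe. */
static void instr_slice(uint64_t paddr, unsigned x, unsigned y,
                        unsigned *dest_x, unsigned *dest_y) {
    unsigned target = (unsigned)((paddr >> 6) % K);  /* log2(K) address bits */
    for (unsigned dy = 0; dy < 2; dy++)
        for (unsigned dx = 0; dx < 2; dx++)
            if (rid(x + dx, y + dy) == target) {
                *dest_x = x + dx;
                *dest_y = y + dy;
                return;
            }
}
```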
Outline
- Introduction
- Access Classification and Block Placement
- Reactive NUCA Mechanisms
- Evaluation
- Conclusion
Evaluation
[Figure: performance of Shared (S), R-NUCA (R), and Ideal (I) organizations across workloads.]
Delivers robust performance across workloads.
vs. Shared: same for Web and DSS; 17% better for OLTP and MIX.
vs. Private: 17% better for OLTP, Web, and DSS; same for MIX.
Conclusions
Data may exhibit arbitrarily complex behaviors... but few that matter! Learn the behaviors that matter at run time; make the common case fast, the rare case correct.
Reactive NUCA: near-optimal cache block placement. Simple, scalable, low-overhead, transparent, and no coherence mechanisms needed at the LLC.
Robust performance: matches the best alternative or beats it by 17% (up to 32%); near-optimal placement (within 5% on average of ideal).
Thank You!
For more information:
N. Hardavellas, M. Ferdman, B. Falsafi, and A. Ailamaki. Near-Optimal Cache Block Placement with Reactive Nonuniform Cache Architectures. IEEE Micro Top Picks, 30(1), January/February 2010.
N. Hardavellas, M. Ferdman, B. Falsafi, and A. Ailamaki. Reactive NUCA: Near-Optimal Block Placement and Replication in Distributed Caches. ISCA 2009.
Backup Slides
Why Are Caches Growing So Large?
- Increasing number of cores: cache grows commensurately (fewer but faster cores have the same effect).
- Increasing datasets: growing faster than Moore's Law!
- Power/thermal efficiency: caches are "cool", cores are "hot", so it's easier to fit more cache in a power budget.
- Limited bandwidth: a larger cache keeps more data on chip, so off-chip pins are used less frequently.
Backup Slides: ASR
ASR vs. R-NUCA Configurations
[Table: ASR vs. R-NUCA configuration parameters — core type (in-order vs. OoO), L2 size (4 vs. 16 MB), memory latency (150/500/90), local L2 latency (12/20 cycles), average shared-L2 latency (25/44/22 cycles) — and their ratios (12.5×, 25.0×, 5.6×, 2.1×, 2.2×, 38%).]
ASR design space search
Backup Slides: Prior Work
Prior Work
Several proposals for CMP cache management: ASR, cooperative caching, victim replication, CMP-NuRapid, D-NUCA.
...but they suffer from shortcomings: complex, high-latency lookup/coherence; they don't scale; lower effective cache capacity; they optimize only for a subset of accesses.
We need: a simple, scalable mechanism for fast access to all data.
Shortcomings of Prior Work
- L2-Private: wastes capacity; high latency (3 slice accesses + 3 hops on shared accesses).
- L2-Shared: high latency.
- Cooperative Caching: doesn't scale (centralized tag structure).
- CMP-NuRapid: high latency (pointer dereference, 3 hops on shared accesses).
- OS-managed L2: wastes capacity (migrates all blocks); spilling to neighbors is useless (all cores run the same code).
Shortcomings of Prior Work
- D-NUCA: no practical implementation (lookup?).
- Victim Replication: high latency (like L2-Private); wastes capacity (the home always stores the block).
- Adaptive Selective Replication (ASR): capacity pressure (replicates at slice granularity); complex (4 separate HW structures to bias a coin).
Backup Slides: Classification and Lookup
Data Classification Timeline
[Figure: timeline. Core i loads A and misses in its TLB; the OS marks the page private to i (P, i, vpage, ppage) and core i allocates A in its local L2 slice. Later, core j (j ≠ i) loads A and misses in its TLB; the OS invalidates A in core i's TLB, core i evicts A, the OS re-marks the page shared (S, vpage, ppage) and replies, and A is re-allocated at its address-interleaved slice.]
Fast & simple lookup for data.
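A sketch of that private-to-shared transition in C; the enum and struct repeat the classification sketch, and the helper functions are hypothetical stand-ins for the real OS and TLB-shootdown machinery:

```c
enum page_class { CLASS_INVALID, CLASS_PRIVATE, CLASS_SHARED };

struct pte_info {
    enum page_class cls;
    unsigned        owner;
};

/* Hypothetical stand-ins for the real OS/TLB machinery. */
static void tlb_shootdown(unsigned core)     { (void)core; /* "inval A" in TLBi  */ }
static void evict_page_blocks(unsigned core) { (void)core; /* "evict A" from L2i */ }

/* When core j touches a page private to core i: invalidate i's stale TLB
 * entry, evict the page's blocks from i's local slice, and re-mark the
 * page shared; the OS then replies to j's TLB miss, and the data is
 * re-allocated at its address-interleaved slice. */
static void promote_to_shared(struct pte_info *pte, unsigned requester) {
    if (pte->cls == CLASS_PRIVATE && pte->owner != requester) {
        tlb_shootdown(pte->owner);
        evict_page_blocks(pte->owner);
        pte->cls = CLASS_SHARED;
    }
}
```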
Misclassifications at Page Granularity
[Figure: fraction of accesses coming from pages that service multiple access types, i.e., access misclassifications.]
A page may service multiple access types, but one type always dominates its accesses.
Classification at page granularity is accurate.
Backup Slides: Placement
Private Data Placement
Spill to neighbors if the working set is too large? NO!!! Each core runs similar threads, so neighboring slices face the same pressure.
Store private data in the local L2 slice (like in a private cache).
Private Data Working Set
OLTP: small per-core working set (3MB / 16 cores ≈ 200KB per core).
Web: primary working set <6KB/core; the rest accounts for <1.5% of L2 references.
DSS: policy doesn't matter much (>100MB working set, <13% of L2 references, very low reuse on private data).
Shared Data Placement
Read-write data with a large working set and low reuse: unlikely to be found in the local slice for reuse, and the next sharer is random [WMPI'04].
Address-interleave in the aggregate L2 (like a shared cache).
Shared Data Working Set
Instruction Placement
Working set too large for one slice, and slices store private & shared data too! Four L2 slices provide sufficient capacity.
Share within clusters of neighbors; replicate across clusters.
Instructions Working Set
Backup Slides: Rotational Interleaving
Instruction Classification and Lookup
Identification: all accesses from the L1-I. But the working set is too large to fit in one cache slice.
[Figure: tiled multicore partitioned into clusters of neighboring slices.]
Share within a cluster of neighbors; replicate across clusters.
Rotational Interleaving
[Figure: 32-tile grid showing each tile's TileID and rotational ID; the RID grows by 1 per step in one dimension and by log2(k) per step in the other.]
Fast access (nearest-neighbor, simple lookup); equalizes capacity pressure at overlapping slices.
Nearest-neighbor size-8 clusters