CS492B Analysis of Concurrent Programs: Memory Hierarchy. Jaehyuk Huh, Computer Science, KAIST. Part of the slides are based on CS:APP from CMU.

Intel Core i7 Cache Hierarchy
Processor package: Core 0 … Core 3, each with its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; an L3 unified cache shared by all cores; main memory below that.
– L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
– L2 unified cache: 256 KB, 8-way, access: 11 cycles
– L3 unified cache: 8 MB, 16-way, access: cycles
– Block size: 64 bytes for all caches

Cache Performance Metrics
Miss Rate
– Fraction of memory references not found in the cache: misses / accesses = 1 – hit rate
– Typical numbers (in percentages): 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
– Time to deliver a line in the cache to the processor; includes the time to determine whether the line is in the cache
– Typical numbers: 1-2 clock cycles for L1; around 10 clock cycles for L2 (e.g., 11 cycles on the Core i7 above)
Miss Penalty
– Additional time required because of a miss; for main memory this is typically on the order of a hundred or more cycles (Trend: increasing!)
AMAT (Average Memory Access Time)
– AMAT = hit latency + miss rate * miss penalty

Pitfalls in Cache Metrics
Is a bigger cache always better than a smaller cache?
How much performance improvement can be expected from reducing the miss rate by 50%?
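A small worked example helps with the second question. This is a sketch of my own, not from the slides; the numbers (4-cycle hit, 100-cycle miss penalty, 10% miss rate) are assumptions for illustration.

#include <stdio.h>

/* AMAT = hit_latency + miss_rate * miss_penalty */
static double amat(double hit_latency, double miss_rate, double miss_penalty)
{
    return hit_latency + miss_rate * miss_penalty;
}

int main(void)
{
    double before = amat(4.0, 0.10, 100.0);   /* 4 + 0.10*100 = 14 cycles */
    double after  = amat(4.0, 0.05, 100.0);   /* 4 + 0.05*100 =  9 cycles */
    /* Halving the miss rate does NOT halve the access time: 14 -> 9 cycles,
       about a 1.56x improvement, because the hit latency is unchanged. */
    printf("AMAT before: %.1f cycles, after: %.1f cycles\n", before, after);
    return 0;
}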

Writing Cache Friendly Code
Make the common case go fast
– Focus on the inner loops of the core functions
Minimize the misses in the inner loops
– Repeated references to variables are good (temporal locality)
– Stride-1 reference patterns are good (spatial locality)
Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.

The Memory Mountain
Read throughput (read bandwidth)
– Number of bytes read from memory per second (MB/s)
Memory mountain: measured read throughput as a function of spatial and temporal locality.
– A compact way to characterize memory system performance.

Memory Mountain Test Function

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;    /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                      /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);   /* call test(elems, stride) */
    return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
}
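The slides show only the kernel; below is a minimal sketch (mine, not from the slides) of how the mountain could be swept by calling run() over a grid of working-set sizes and strides. It assumes the run() function above, a global data[] array of at least MAXBYTES bytes, the fcyc2 timing routine from the CS:APP code, and a clock-rate constant MHZ; all of those names are assumptions here.

#include <stdio.h>

#define MINBYTES  (1 << 11)   /* assumed smallest working set: 2 KB  */
#define MAXBYTES  (1 << 26)   /* assumed largest working set: 64 MB  */
#define MAXSTRIDE 32          /* assumed largest stride, in ints     */

int main(void)
{
    int size, stride;
    /* One row of the mountain per working-set size, one column per stride. */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.0f\t", run(size, stride, MHZ));  /* read throughput, MB/s */
        printf("\n");
    }
    return 0;
}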

The Memory Mountain (Intel Core i7)
32 KB L1 i-cache, 32 KB L1 d-cache, 256 KB unified L2 cache, 8 MB unified L3 cache; all caches on-chip.
Figure: read throughput as a function of working-set size and stride, showing slopes of spatial locality (along the stride axis) and ridges of temporal locality (where the working set fits in a given cache level).

Miss Rate Analysis for Matrix Multiply
Assume:
– Line size = 32 B (big enough for four 64-bit words)
– Matrix dimension (N) is very large: approximate 1/N as 0.0
– Cache is not even big enough to hold multiple rows
Analysis method:
– Look at the access pattern of the inner loop over A, B, and C

Matrix Multiplication Example
Description:
– Multiply N x N matrices
– O(N^3) total operations
– N reads per source element
– N values summed per destination, but may be able to hold in a register

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Variable sum held in register.

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order
– each row occupies contiguous memory locations
Stepping through columns in one row:
– for (i = 0; i < N; i++) sum += a[0][i];
– accesses successive elements
– if block size (B) > 4 bytes, exploits spatial locality: compulsory miss rate = 4 bytes / B
Stepping through rows in one column:
– for (i = 0; i < n; i++) sum += a[i][0];
– accesses distant elements: no spatial locality!
– compulsory miss rate = 1 (i.e., 100%)
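To make the two access patterns concrete, here is a small self-contained example of my own (not from the slides): with 4-byte ints and B-byte cache lines, the row-wise sum misses roughly once per B/4 elements, while the column-wise sum misses on essentially every access once a column no longer fits in the cache.

#include <stdio.h>

#define N 2048
int a[N][N];    /* row-major: a[i][j] lives at byte offset (i*N + j) * 4 */

/* Row-wise (stride-1): spatial locality; compulsory miss rate ~ 4 bytes / B */
long sum_rows(void)
{
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-wise (stride N): each access touches a different cache line,
   so once a column exceeds the cache size the miss rate approaches 100% */
long sum_cols(void)
{
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    /* Same result, very different cache behavior. */
    printf("%ld %ld\n", sum_rows(), sum_cols());
    return 0;
}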

Matrix Multiplication (ijk)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (jik)

/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (kij)

/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (ikj)

/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (jki)

/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Matrix Multiplication (kji)

/* kji */
for (k=0; k<n; k++) {
    for (j=0; j<n; j++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Summary of Matrix Multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Core i7 Matrix Multiply Performance
Figure: cycles per inner-loop iteration versus array size for the three classes of loop orderings; jki/kji (2.0 misses/iter) is slowest, ijk/jik (1.25 misses/iter) is in between, and kij/ikj (0.5 misses/iter) is fastest.

Example: Matrix Multiplication
c = a * b, with all three n x n matrices stored in row-major order.

c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n)
{
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
                c[i*n + j] += a[i*n + k] * b[k*n + j];
}

Cache Miss Analysis
Assume:
– Matrix elements are doubles
– Cache block = 8 doubles
– Cache size C << n (much smaller than n)
First iteration:
– n/8 + n = 9n/8 misses (n/8 for the stride-1 row of a, n for the column of b)
– Afterwards in cache (schematic): the row of a and an 8-element-wide strip of b

Cache Miss Analysis
Assume:
– Matrix elements are doubles
– Cache block = 8 doubles
– Cache size C << n (much smaller than n)
Second iteration:
– Again: n/8 + n = 9n/8 misses
Total misses:
– 9n/8 * n^2 = (9/8) * n^3

Blocked Matrix Multiplication

c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b, block size B x B */
void mmm(double *a, double *b, double *c, int n)
{
    int i, j, k, i1, j1, k1;
    for (i = 0; i < n; i += B)
        for (j = 0; j < n; j += B)
            for (k = 0; k < n; k += B)
                /* B x B mini matrix multiplications */
                for (i1 = i; i1 < i+B; i1++)
                    for (j1 = j; j1 < j+B; j1++)
                        for (k1 = k; k1 < k+B; k1++)
                            c[i1*n + j1] += a[i1*n + k1] * b[k1*n + j1];
}

Each (i1, j1) pair updates one element of a B x B block of c.

Cache Miss Analysis
Assume:
– Cache block = 8 doubles
– Cache size C << n (much smaller than n)
– Three B x B blocks fit into the cache: 3B^2 < C
First (block) iteration:
– B^2/8 misses for each block
– 2n/B * B^2/8 = nB/4 misses (omitting matrix c)
– Afterwards in cache (schematic): the B x B blocks just touched, out of n/B blocks per dimension

Cache Miss Analysis
Assume:
– Cache block = 8 doubles
– Cache size C << n (much smaller than n)
– Three B x B blocks fit into the cache: 3B^2 < C
Second (block) iteration:
– Same as the first iteration: 2n/B * B^2/8 = nB/4
Total misses:
– nB/4 * (n/B)^2 = n^3/(4B)
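To see the payoff, compare the two totals; this is a rough illustration of mine, and the particular block size is only an assumption:

    unblocked misses:  (9/8) * n^3
    blocked misses:    n^3 / (4B)
    improvement:       ((9/8) * n^3) / (n^3 / (4B)) = 9B/2

For example, with B = 32, 3B^2 = 3072 doubles = 24 KB fits in a 32 KB L1 d-cache, and blocking reduces misses by a factor of about 9*32/2 = 144.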

Cache Hierarchy for Multicores
A high-bandwidth, low-latency interconnect provides flexibility in cache design.
Fundamental cache trade-offs:
– Larger cache → slower cache
– More ports (more access bandwidth) → slower cache
Private vs. shared caches in multicores:
– hit latency vs. miss rate trade-offs

Shared Cache
Shares cache capacity among processors
– High caching efficiency when working sets are not uniform among threads
– Reduces overall misses
Slow hit time
– Capacity is larger than a private cache
– High bandwidth requirement

Non-Uniform Memory Architectures
Memory latency differs depending on which memory node the data is placed in.
Who decides where the data is located? Who decides the thread-to-core mapping?
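In practice the programmer or the OS makes both decisions: thread placement via CPU affinity, and (on Linux, by default) memory placement via first-touch allocation. The sketch below is a minimal Linux-specific example of pinning the calling thread to a core with sched_setaffinity; the helper name is my own.

#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling thread to one core. With Linux's default first-touch
   policy, memory this thread allocates and touches first tends to be
   placed on that core's local NUMA node. */
int pin_to_core(int core)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);
    return sched_setaffinity(0, sizeof(mask), &mask);  /* 0 = calling thread */
}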

Cache Sharing: Private Cache
Has been used for single-core processors and traditional MPs
Fast hit time (compared to a shared cache)
– Capacity is smaller than a shared cache
– Needs to serve only one processor → low bandwidth requirement
– Fast hit time = fast miss resolution time → can send miss requests early
Communication among processors
– An L2 coherence mechanism handles inter-processor communication
No negative interference from other processors
Organization: P0-P3, each with a private L2, all connected to memory.

Cache Sharing: Shared Cache
Slow hit time
– Capacity is larger than a private cache
– High bandwidth requirement
– Slow miss resolution time, so high associativity is needed
Shares cache capacity among processors
– High caching efficiency when working sets are not uniform among threads
– Reduces overall misses
Positive interference from other processors
– Prefetching effect for shared data
– One processor brings shared data from memory; the others can then use it without misses
Possibly faster communication among processors
How to implement inter-L1 coherence?
Organization: P0-P3 sharing a single L2, connected to memory.

Cache Sharing
Sharing conflicts among cores
– A high-miss thread and a low-miss thread share a cache
– The high-miss thread evicts the data of the low-miss thread
– High-miss threads: do not use the cache effectively anyway
– Low-miss threads: negatively affected by the high-miss threads
– Worst case: one thread can almost block the execution of the other threads
How to prevent this?
– Architectural solutions: static or dynamic cache partitioning
– OS solutions: page coloring (sketched below), scheduling
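As an illustration of page coloring, here is a sketch of my own under assumed parameters, not an actual OS implementation: two physical pages can compete for the same sets of the last-level cache only if they have the same "color", so an OS can reserve disjoint colors for different threads.

#include <stdint.h>

#define PAGE_SIZE   4096u       /* assumed 4 KB pages                    */
#define CACHE_SIZE  (8u << 20)  /* assumed 8 MB shared last-level cache  */
#define CACHE_WAYS  16u         /* assumed 16-way set associative        */

/* Number of distinct colors = bytes covered by one way / page size.
   With the numbers above: (8 MB / 16) / 4 KB = 128 colors. */
#define NUM_COLORS  (CACHE_SIZE / CACHE_WAYS / PAGE_SIZE)

/* A page's color is determined by the set-index bits above the page offset. */
unsigned page_color(uint64_t phys_addr)
{
    return (unsigned)((phys_addr / PAGE_SIZE) % NUM_COLORS);
}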