Buffer Overflow/Caches CSE 351 Winter 2017

Buffer Overflow/Caches CSE 351 Winter 2017
xkcd #1353 alt text: "I looked at some of the data dumps from vulnerable sites, and it was ... bad. I saw emails, passwords, password hints. SSL keys and session cookies. Important servers brimming with visitor IPs. Attack ships on fire off the shoulder of Orion, c-beams glittering in the dark near the Tannhäuser Gate. I should probably patch OpenSSL." http://xkcd.com/1353/

Administrivia: Lab 3 is out, now due on Mon, Feb 20th (enjoy the extension!). Midterm?

x86-64 Linux Memory Layout (not drawn to scale)
Addresses run from 0x000000 at the bottom of the address space up to 0x00007FFFFFFFFFFF at the top of user space; code starts at 0x400000.
- Stack: runtime stack (8 MB limit) for local variables
- Heap: dynamically allocated as needed (malloc(), calloc(), new, ...)
- Data: statically allocated data; read-only (string literals) and read/write (global arrays and variables)
- Code / Shared Libraries: executable machine instructions, read-only

Reminder: x86-64/Linux Stack Frame (higher addresses at the top, lower at the bottom)
Caller's stack frame:
- Arguments 7+ (if more than 6 args) for this call
- Return address (pushed by the call instruction)
Current/callee stack frame, from the frame pointer %rbp down to the stack pointer %rsp:
- Old frame pointer %rbp (optional)
- Saved register context (when reusing registers)
- Local variables (if they can't be kept in registers)
- "Argument build" area (parameters for a function the callee is about to call, if needed)

Memory Allocation Example (not drawn to scale)

    char big_array[1L<<24];  /* 16 MB */
    char huge_array[1L<<31]; /*  2 GB */
    int global = 0;

    int useless() { return 0; }

    int main() {
        void *p1, *p2, *p3, *p4;
        int local = 0;
        p1 = malloc(1L << 28); /* 256 MB */
        p2 = malloc(1L << 8);  /* 256 B  */
        p3 = malloc(1L << 32); /*   4 GB */
        p4 = malloc(1L << 8);  /* 256 B  */
        /* Some print statements ... */
    }

Where does everything go? big_array, huge_array, and global go in data; local goes on the stack; the blocks returned by malloc() (p1-p4) come from the heap.

Buffer Overflows
Buffer overflows are possible because C does not check array boundaries. They are dangerous because buffers for user input are often stored on the stack. Specific topics:
- Address space layout (more details!)
- Input buffers on the stack
- Overflowing buffers and injecting code
- Defenses against buffer overflows

Internet Worm
These characteristics of the traditional Linux memory layout provide opportunities for malicious programs:
- The stack grows "backwards" in memory
- Data and instructions are both stored in the same memory
In November 1988, the Internet Worm attacked thousands of Internet hosts. How did it happen? Stack buffer overflow exploits! http://en.wikipedia.org/wiki/Morris_worm

Buffer Overflow in a Nutshell
- Many Unix/Linux/C functions don't check argument sizes, and C does not check array bounds
- This allows overflowing (writing past the end of) buffers (arrays)
- Overflows of buffers on the stack overwrite "interesting" data; attackers just choose the right inputs
Why a big deal? It is (was?) the #1 technical cause of security vulnerabilities (the #1 overall cause is social engineering / user ignorance).
Simplest form: unchecked lengths on string inputs, particularly for bounded character arrays on the stack; sometimes referred to as "stack smashing".

String Library Code: implementation of the Unix function gets(). What could go wrong in this code?

    /* Get string from stdin */
    char* gets(char* dest) {
        int c = getchar();
        char* p = dest;        /* pointer to start of an array */
        while (c != EOF && c != '\n') {
            *p++ = c;          /* same as: *p = c; p++; */
            c = getchar();
        }
        *p = '\0';
        return dest;
    }

String Library Code: in the gets() implementation above, there is no way to specify a limit on the number of characters to read. Similar problems exist with other Unix functions:
- strcpy: copies a string of arbitrary length into a destination buffer
- scanf, fscanf, sscanf, when given the %s specifier
The man page for gets(3) now says "BUGS: Never use gets()."

Vulnerable Buffer Code

    /* Echo Line */
    void echo() {
        char buf[4];  /* Way too small! */
        gets(buf);
        puts(buf);
    }

    void call_echo() {
        echo();
    }

If we overrun the buffer, it's fine for a while... but then we add one more character, and it seg-faults! Why? (Let's see...)

    unix> ./buf-nsp
    Enter string: 012345678901234567890123
    012345678901234567890123

    unix> ./buf-nsp
    Enter string: 0123456789012345678901234
    Segmentation Fault

Buffer Overflow Disassembly

    echo:
    00000000004006cf <echo>:
      4006cf: 48 83 ec 18      sub   $24,%rsp
      4006d3: 48 89 e7         mov   %rsp,%rdi
      4006d6: e8 a5 ff ff ff   callq 400680 <gets>
      4006db: 48 89 e7         mov   %rsp,%rdi
      4006de: e8 3d fe ff ff   callq 400520 <puts@plt>
      4006e3: 48 83 c4 18      add   $24,%rsp
      4006e7: c3               ret

    call_echo:
      4006e8: 48 83 ec 08      sub   $8,%rsp
      4006ec: b8 00 00 00 00   mov   $0x0,%eax
      4006f1: e8 d9 ff ff ff   callq 4006cf <echo>
      4006f6: 48 83 c4 08      add   $8,%rsp      <-- return address
      4006fa: c3               ret

Buffer Overflow Stack: before the call to gets, the stack holds (from higher to lower addresses): the stack frame for call_echo, the return address (8 bytes), 20 unused bytes, and then buf ([3] [2] [1] [0]), with buf starting at %rsp. (Note: in the slide diagrams, addresses increase right-to-left, bottom-to-top; sorry.)

    /* Echo Line */
    void echo() {
        char buf[4];  /* Way too small! */
        gets(buf);
        puts(buf);
    }

    echo:
        subq $24, %rsp
        movq %rsp, %rdi
        call gets
        ...

Buffer Overflow Example: before the call to gets, the 8-byte return address slot holds 0x00000000004006f6 (stored little-endian), the address of the instruction after the call in call_echo:

    call_echo:
        ...
        4006f1: callq 4006cf <echo>
        4006f6: add $8,%rsp

buf ([3] [2] [1] [0]) sits at %rsp, with the 20 unused bytes between it and the return address.

Buffer Overflow Example #1: after the call to gets with input "01234567890123456789012" (23 characters), the digits fill buf and the 20 unused bytes (note: digit "N" is just 0x3N in ASCII), and the terminating null lands in the last unused byte. The return address (00 40 06 f6) is untouched: we overflowed the buffer, but did not corrupt state.

    unix> ./buf-nsp
    Enter string: 01234567890123456789012
    01234567890123456789012

Buffer Overflow Example #2: with 25 characters, the input runs past the 24 bytes of buf plus unused space, and the final character ('4' = 0x34) and terminating null overwrite the low bytes of the return address, producing an invalid address. We overflowed the buffer and corrupted the return pointer:

    unix> ./buf-nsp
    Enter string: 0123456789012345678901234
    Segmentation Fault

Buffer Overflow Example #3: with exactly 24 characters, the digits fill buf and the unused space, and the terminating null overwrites just the low byte of the return address, changing 0x4006f6 to 0x400600. We overflowed the buffer and corrupted the return pointer, but the program seems to work!

    unix> ./buf-nsp
    Type a string: 012345678901234567890123
    012345678901234567890123

Buffer Overflow Example #3 Explained: the corrupted return address 0x400600 happens to land in register_tm_clones:

    register_tm_clones:
      ...
      400600: mov %rsp,%rbp
      400603: mov %rax,%rdx
      400606: shr $0x3f,%rdx
      40060a: add %rdx,%rax
      40060d: sar %rax
      400610: jne 400614
      400612: pop %rbp
      400613: retq

echo "returns" to this unrelated code. Lots of things happen, but without modifying critical state, and it eventually executes retq back to main. (register_tm_clones deals with transactional memory, which is intended to make programming with threads simpler; parallelism and synchronization are waaaaaay beyond the scope of this course.)

Malicious Use of Buffer Overflow: Code Injection Attacks

    void foo() {
        bar();
    A:  ...
    }

    int bar() {
        char buf[64];
        gets(buf);
        ...
        return ...;
    }

The input string contains the byte representation of executable code ("exploit code"), padding, and the address B of buf itself. gets() writes all of this into bar's stack frame, starting at buf (address B, low addresses) and continuing up until it overwrites the return address A with B. When bar() executes ret, it jumps to B and runs the exploit code instead of returning to A in foo.

Exploits Based on Buffer Overflows
Buffer overflow bugs can allow remote machines to execute arbitrary code on victim machines. They are distressingly common in real programs; programmers keep making the same mistakes :( (recent measures make these attacks much more difficult). Examples across the decades:
- The original "Internet worm" (1988)
- Still happens!! Heartbleed (2014, affected 17% of servers)
- Fun: Nintendo hacks
  - Using glitches to rewrite code: https://www.youtube.com/watch?v=TqK-2jUQBUY
  - FlappyBird in Mario: https://www.youtube.com/watch?v=hB6eY73sLV0
You will learn some of the tricks in Lab 3. Hopefully that convinces you to never leave such holes in your programs!!

Example: the original Internet worm (1988)
It exploited a few vulnerabilities to spread:
- Early versions of the finger server (fingerd) used gets() to read the argument sent by the client: finger mario@mushroom.com
- The worm attacked the fingerd server by sending a phony argument: finger "exploit-code padding new-return-addr"
- The exploit code executed a root shell on the victim machine with a direct TCP connection to the attacker
Once on a machine, it scanned for other machines to attack, invading ~6000 computers in hours (10% of the Internet); see the June 1989 article in Comm. of the ACM. The young author of the worm was prosecuted... (TCP: transmission control protocol)

Heartbleed (2014!)
A buffer over-read in OpenSSL, the open-source security library (the bug was in a small range of versions). A "Heartbeat" packet specifies the length of its message, and the server echoes it back; the library just "trusted" this length. This allowed attackers to read the contents of memory anywhere they wanted. An estimated 17% of the Internet was affected: "catastrophic". Github, Yahoo, Stack Overflow, Amazon AWS, ... (Image: FenixFeather, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=32276981)
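The shape of the bug, as a minimal sketch (hypothetical code for illustration, not OpenSSL's actual source; the struct and names are made up): the reply length comes from the packet itself rather than from the number of bytes actually received.

    #include <string.h>

    struct heartbeat {
        unsigned short claimed_len;   /* length field supplied by the client */
        char payload[16];             /* bytes actually received */
    };

    void reply_to_heartbeat(char *reply, struct heartbeat *pkt) {
        /* BUG: trusts claimed_len; if it exceeds sizeof(pkt->payload),
           this reads adjacent server memory into the reply */
        memcpy(reply, pkt->payload, pkt->claimed_len);
    }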

Dealing with buffer overflow attacks
1) Avoid overflow vulnerabilities in code
2) Employ system-level protections
3) Have the compiler use "stack canaries"

1) Avoid Overflow Vulnerabilities in Code

    /* Echo Line */
    void echo() {
        char buf[4];  /* Way too small! */
        fgets(buf, 4, stdin);
        puts(buf);
    }

Use library routines that limit string lengths (a short sketch of these alternatives follows below):
- fgets instead of gets (the 2nd argument to fgets sets the limit)
- strncpy instead of strcpy
- Don't use scanf with the %s conversion specification; use fgets to read the string, or use %Ns where N is a suitable integer
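For concreteness, a small sketch of those alternatives side by side (my own illustration, assuming a 4-byte buffer as in the example and some NUL-terminated string src):

    #include <stdio.h>
    #include <string.h>

    void safe_input_examples(const char *src) {
        char buf[4];
        fgets(buf, sizeof buf, stdin);       /* reads at most 3 chars + '\0' */
        scanf("%3s", buf);                   /* field width 3 leaves room for '\0' */
        strncpy(buf, src, sizeof buf - 1);   /* copies at most 3 chars ...    */
        buf[sizeof buf - 1] = '\0';          /* ... and terminates explicitly */
    }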

2) System-Level Protections: randomized stack offsets
At the start of the program, allocate a random amount of space on the stack. This shifts stack addresses for the entire program, so addresses vary from one run to another, making it difficult for a hacker to predict where the inserted code will begin. Example: the code from Slide 6 executed 5 times; the address of variable local was:

    0x7ffe4d3be87c
    0x7fff75a4f9fc
    0x7ffeadb7c80c
    0x7ffeaea2fdac
    0x7ffcd452017c

The stack is repositioned each time the program executes.

2) System-Level Protections: non-executable code segments
In traditional x86, a region of memory could be marked only as "read-only" or "writeable", and anything readable could be executed. x86-64 added an explicit "execute" permission (hardware support is needed). With the stack marked as non-executable, the processor will NOT execute code in the Stack, Static Data, or Heap regions, so any attempt to execute exploit code written onto the stack by gets() will fail.

3) Stack Canaries
Basic idea: place a special value (the "canary") on the stack just beyond the buffer:
- A secret value known only to the compiler-generated code
- Placed "after" the buffer but before the return address
- Checked for corruption before exiting the function
GCC implementation (now the default): -fstack-protector. The code back on Slide 13 (buf-nsp) was compiled with the -fno-stack-protector flag; with the canary enabled:

    unix> ./buf
    Enter string: 01234567
    01234567

    unix> ./buf
    Enter string: 012345678
    *** stack smashing detected ***
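In C-level pseudocode, the compiler-inserted checks look roughly like this (a schematic sketch, not GCC's actual output: the real canary is read from thread-local storage, %fs:0x28 on x86-64 Linux, and the check is emitted by the compiler rather than written by hand; __stack_chk_fail is the real glibc failure handler):

    extern long __secret_canary;     /* hypothetical stand-in for the per-run secret */
    void __stack_chk_fail(void);     /* glibc: prints "*** stack smashing detected ***" */

    void echo(void) {
        long canary = __secret_canary;   /* copy placed just beyond buf */
        char buf[4];
        gets(buf);                       /* an overflow runs past buf into the canary */
        puts(buf);
        if (canary != __secret_canary)   /* corrupted? */
            __stack_chk_fail();          /* abort before executing ret */
    }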

Summary
- Avoid overflow vulnerabilities: use library routines that limit string lengths
- Employ system-level protections: randomized stack offsets; code on the stack is not executable
- Have the compiler use "stack canaries"

Roadmap
Course roadmap: C (and Java) compile to assembly language, which assembles to machine code, all running on a computer system managed by the OS. Topics: memory & data; integers & floats; machine code & C; x86 assembly; procedures & stacks; arrays & structs; memory & caches; processes; virtual memory; memory allocation; Java vs. C.

C:
    car *c = malloc(sizeof(car));
    c->miles = 100;
    c->gals = 17;
    float mpg = get_mpg(c);
    free(c);

Java:
    Car c = new Car();
    c.setMiles(100);
    c.setGals(17);
    float mpg = c.getMPG();

Assembly language:
    get_mpg:
        pushq %rbp
        movq  %rsp, %rbp
        ...
        popq  %rbp
        ret

Machine code:
    0111010000011000
    100011010000010000000010
    1000100111000010
    110000011111101000011111

We have been talking about the explicit HW interface: instructions, registers, memory accesses. Strictly speaking, performance is not part of that interface: different implementations of it can have different performance.

How does execution time grow with SIZE?

    int array[SIZE];
    int sum = 0;

    for (int i = 0; i < 200000; i++) {
        for (int j = 0; j < SIZE; j++) {
            sum += array[j];
        }
    }

What should we expect for this experiment? (Plot: time vs. SIZE.)
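To actually run the experiment, here is a minimal timing-harness sketch (my own illustration, not the course's benchmark; SIZE is taken from the command line). Compile without optimization (e.g., gcc -O0) so the loop is not optimized away.

    /* timing.c: hypothetical harness for the loop above */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        int size = atoi(argv[1]);                 /* SIZE from the command line */
        int *array = calloc(size, sizeof(int));   /* zero-filled array */
        long sum = 0;

        clock_t start = clock();
        for (int i = 0; i < 200000; i++)
            for (int j = 0; j < size; j++)
                sum += array[j];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("SIZE=%d time=%.3fs (sum=%ld)\n", size, secs, sum);
        free(array);
        return 0;
    }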

Actual Data (plot of time vs. SIZE): there is a knee in the curve after some size threshold, the memory "wall". Beyond it the slope is steeper, i.e., execution time grows faster; below the threshold the array fits in cache, above it, it doesn't.

Problem: Processor-Memory Bottleneck
Processor performance doubled about every 18 months, while main-memory bus latency/bandwidth evolved much more slowly. A Core 2 Duo can process at least 256 bytes/cycle, but memory bandwidth is only 2 bytes/cycle, with a latency of 100-200 cycles (30-60 ns). The problem: lots of waiting on memory. (cycle: single machine step, fixed-time)
Analogy: if one bite of a sandwich takes ~5-10 seconds, going to memory is a 15-30 minute trip to the grocery store.
(Aside: there are only 2 hard problems in computer science: cache invalidation, naming things, and off-by-one errors.)

Problem: Processor-Memory Bottleneck (continued)
Solution: caches. Insert a small, fast memory between the CPU registers and main memory: CPU/Reg <-> Cache <-> Main Memory. Have your avocados and eat them too.

Cache 💰
Pronunciation: "cash"; we abbreviate this as "$".
- English: a hidden storage space for provisions, weapons, and/or treasures
- Computer: memory with short access time used for the storage of frequently or recently used instructions (i-cache/I$) or data (d-cache/D$)
- More generally: used to optimize data transfers between any system elements with different characteristics (network interface cache, I/O cache, etc.)

General Cache Mechanics
- Cache: smaller, faster, more expensive memory; caches a subset of the blocks (a.k.a. lines). In the example it currently holds blocks 7, 9, 14, and 3.
- Memory: larger, slower, cheaper memory, viewed as partitioned into numbered "blocks" or "lines".
- Data is copied between them in block-sized transfer units (note: blocks, not bytes).

General Cache Concepts: Hit
Request: 14. Data in block b is needed, and block b is in the cache: Hit! The cached copy is returned.

General Cache Concepts: Miss
Request: 12. Data in block b is needed, but block b is not in the cache: Miss! Block b is fetched from memory and stored in the cache.
- Placement policy: determines where b goes
- Replacement policy: determines which block gets evicted (the victim)
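These mechanics fit in a few lines of C. A toy sketch (my own illustration: a 4-block fully associative cache with FIFO replacement; real caches choose placement and replacement policies differently):

    /* toy_cache.c: hit/miss mechanics with the blocks from the example */
    #include <stdio.h>

    #define NBLOCKS 4

    int cached[NBLOCKS] = {7, 9, 14, 3};  /* block numbers currently in the cache */
    int next_victim = 0;                  /* FIFO replacement pointer */

    int access_block(int b) {             /* returns 1 on hit, 0 on miss */
        for (int i = 0; i < NBLOCKS; i++)
            if (cached[i] == b) return 1; /* hit: block already present */
        cached[next_victim] = b;          /* miss: fetch b, evict the victim */
        next_victim = (next_victim + 1) % NBLOCKS;
        return 0;
    }

    int main(void) {
        printf("request 14: %s\n", access_block(14) ? "hit" : "miss");  /* hit  */
        printf("request 12: %s\n", access_block(12) ? "hit" : "miss");  /* miss */
        return 0;
    }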

Why Caches Work
Locality: programs tend to use data and instructions with addresses near or equal to those they have used recently.
- Temporal locality: recently referenced items are likely to be referenced again in the near future. (Analogy: you took a bite of your sandwich; you're probably going to take another soon.)
- Spatial locality: items with nearby addresses tend to be referenced close together in time. (Analogy: you're probably going to take a bite out of the other half of the same sandwich, as opposed to a new sandwich.)
How do caches take advantage of this? By transferring whole blocks at a time: each fetch brings in nearby data along with the requested item.

Example: Any Locality? Data? Instructions?

    sum = 0;
    for (i = 0; i < n; i++) {
        sum += a[i];
    }
    return sum;

- Data: temporal locality (sum referenced in each iteration); spatial locality (array a[] accessed in a stride-1 pattern)
- Instructions: temporal locality (cycle through the loop repeatedly); spatial locality (instructions referenced in sequence)

Locality Example #1 (M = 3, N = 4)

    int sum_array_rows(int a[M][N]) {
        int i, j, sum = 0;
        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

Layout in memory (row-major): a[0][0] a[0][1] a[0][2] a[0][3] a[1][0] a[1][1] a[1][2] a[1][3] a[2][0] a[2][1] a[2][2] a[2][3], with row a[0] starting at address 76, a[1] at 92, and a[2] at 108 (note: 76 is just one possible starting address of array a).
Access pattern: a[0][0], a[0][1], a[0][2], a[0][3], a[1][0], a[1][1], a[1][2], a[1][3], a[2][0], a[2][1], a[2][2], a[2][3]. Stride = 1: the accesses walk straight through memory.
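Those addresses follow from C's row-major layout rule: for 4-byte ints, &a[i][j] = base + 4*(i*N + j). For example, with base 76 and N = 4, &a[1][0] = 76 + 4*(1*4 + 0) = 92 and &a[2][0] = 76 + 4*(2*4 + 0) = 108, matching the layout above.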

Locality Example #2 (M = 3, N = 4)

    int sum_array_cols(int a[M][N]) {
        int i, j, sum = 0;
        for (j = 0; j < N; j++)
            for (i = 0; i < M; i++)
                sum += a[i][j];
        return sum;
    }

The layout in memory is the same row-major order as before (rows at 76, 92, 108), but the access pattern is column-wise: a[0][0], a[1][0], a[2][0], a[0][1], a[1][1], a[2][1], a[0][2], a[1][2], a[2][2], a[0][3], a[1][3], a[2][3]. Stride = N = 4 elements: each access jumps a full row ahead, so spatial locality is poor.

Locality Example #3
What is wrong with this code? How can it be fixed? (See the fixed version after the layout below.)

    int sum_array_3D(int a[M][N][L]) {
        int i, j, k, sum = 0;
        for (i = 0; i < N; i++)
            for (j = 0; j < L; j++)
                for (k = 0; k < M; k++)
                    sum += a[k][i][j];
        return sum;
    }

Layout in memory (N = 3, L = 4): all of a[0] comes first (a[0][0][0] ... a[0][2][3], rows at 76, 92, 108), followed by all of a[1] (rows at 124, 140, 156), then a[2] starting at 172, and so on. The innermost loop varies k, the first subscript, so consecutive accesses jump an entire N×L-element plane at a time: the worst possible stride.
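One standard fix (the slide asks the question; this particular reordering is my suggestion): reorder the loop nest so the innermost loop varies the last subscript, making the accesses stride-1 through memory.

    /* Fixed version: iterate in memory order */
    int sum_array_3D_fixed(int a[M][N][L]) {
        int i, j, k, sum = 0;
        for (k = 0; k < M; k++)           /* outermost: first subscript */
            for (i = 0; i < N; i++)
                for (j = 0; j < L; j++)   /* innermost: last subscript */
                    sum += a[k][i][j];    /* consecutive addresses, stride 1 */
        return sum;
    }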

Cache Performance Metrics
There is a huge difference between a cache hit and a cache miss: it could be a 100x speed difference between accessing cache and main memory (measured in clock cycles).
- Miss Rate (MR): fraction of memory references not found in the cache (misses / accesses) = 1 - Hit Rate
- Hit Time (HT): time to deliver a block in the cache to the processor, including the time to determine whether the block is in the cache
- Miss Penalty (MP): additional time required because of a miss

Cache Performance
Two things hurt the performance of a cache: miss rate and miss penalty.
Average Memory Access Time (AMAT): the average time to access memory, considering both hits and misses:
AMAT = Hit time + Miss rate × Miss penalty (abbreviated AMAT = HT + MR × MP)
A 99% hit rate is twice as good as a 97% hit rate! Assume an HT of 1 clock cycle and an MP of 100 clock cycles:
97%: AMAT = 1 + 0.03 × 100 = 4 cycles
99%: AMAT = 1 + 0.01 × 100 = 2 cycles

Peer Instruction Question
Processor specs: 200 ps clock, MP of 50 clock cycles, MR of 0.02 misses/instruction, and HT of 1 clock cycle.
AMAT = ?
Which improvement would be best?
a) 190 ps clock
b) MP of 40 clock cycles
c) MR of 0.015 misses/instruction
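Working through the arithmetic (the slide leaves this as an exercise):
AMAT = HT + MR × MP = 1 + 0.02 × 50 = 2 cycles = 400 ps
a) 190 ps clock: 2 cycles × 190 ps = 380 ps
b) MP of 40: 1 + 0.02 × 40 = 1.8 cycles × 200 ps = 360 ps
c) MR of 0.015: 1 + 0.015 × 50 = 1.75 cycles × 200 ps = 350 ps
So reducing the miss rate gives the biggest improvement.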

Can we have more than one cache?
Why would we want to do that? To avoid going to memory! Typical performance numbers:
- Miss rate: L1 MR = 3-10%; L2 MR is quite small (e.g., < 1%), depending on parameters, etc.
- Hit time: L1 HT = 4 clock cycles; L2 HT = 10 clock cycles
- Miss penalty: MP = 50-200 cycles for missing in L2 and going to main memory (trend: increasing!)
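With two levels, the AMAT formula nests: AMAT = L1 HT + L1 MR × (L2 HT + L2 MR × MP). Plugging in illustrative numbers from the ranges above (the 5% L1 miss rate and 100-cycle MP are my assumptions): AMAT = 4 + 0.05 × (10 + 0.01 × 100) = 4.55 cycles, versus 4 + 0.05 × 100 = 9 cycles with no L2 under the same assumptions.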

Memory Hierarchies
Some fundamental and enduring properties of hardware and software systems:
- Faster storage technologies almost always cost more per byte and have lower capacity
- The gaps between memory technology speeds are widening (true for registers vs. cache, cache vs. DRAM, DRAM vs. disk, etc.)
- Well-written programs tend to exhibit good locality
These properties complement each other beautifully, and they suggest an approach for organizing memory and storage systems known as a memory hierarchy.

An Example Memory Hierarchy (smaller, faster, costlier per byte at the top; larger, slower, cheaper per byte at the bottom). The times in parentheses rescale each latency to human terms:
- registers: < 1 ns (about 5-10 s)
- on-chip L1 cache (SRAM): 1 ns
- off-chip L2 cache (SRAM): 5-10 ns (about 1-2 min)
- main memory (DRAM): 100 ns (about 15-30 min)
- SSD: 150,000 ns (about 31 days)
- local secondary storage (local disks): 10,000,000 ns = 10 ms (about 66 months)
- remote secondary storage (distributed file systems, web servers): 1-150 ms (about 1-15 years)
Analogy: going to SSD is like walking down to California to get an avocado; disk is like waiting a year for a new crop of avocados; going across the world is like waiting for an avocado tree to grow up from a seed...

An Example Memory Hierarchy: what each level holds
- CPU registers hold words retrieved from the L1 cache
- The on-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
- The off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
- Main memory (DRAM) holds disk blocks retrieved from local disks
- Local disks hold files retrieved from disks on remote network servers (remote secondary storage: distributed file systems, web servers)
The levels are implemented with different technologies (SRAM, DRAM, disks, solid-state). DRAM: little wells that drain slowly and must be refreshed periodically. SRAM: flip-flops, little logic gates that feed back into each other; faster, but uses more power! That's why the levels have different sizes and costs.

An Example Memory Hierarchy: who manages it
- Registers: explicitly program-controlled (e.g., you refer to exactly %rax, %rbx)
- L1/L2 caches and main memory: the program just sees "memory"; hardware manages the caching transparently
So, why haven't we seen caches before now in this class? Because they're designed to be architecturally transparent!

Memory Hierarchies
Fundamental idea of a memory hierarchy: for each level k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.
Why do memory hierarchies work? Because of locality, programs tend to access the data at level k more often than they access the data at level k+1. Thus, the storage at level k+1 can be slower, and therefore larger and cheaper per bit.
Big idea: the memory hierarchy creates a large pool of storage that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.

Intel Core i7 Cache Hierarchy
A processor package with multiple cores (Core 0 ... Core 3); each core has its own registers, L1 caches, and L2 cache, while the L3 cache and main memory are shared.
- L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
- L2 unified cache: 256 KB, 8-way, access: 11 cycles
- L3 unified cache (shared by all cores): 8 MB, 16-way, access: 30-40 cycles
- Block size: 64 bytes for all caches
(Try: cat /sys/devices/system/cpu/cpu0/cache/index0/ways_of_associativity)

Summary
Memory hierarchy:
- Successively higher levels contain the "most used" data from lower levels
- Exploits temporal and spatial locality
- Caches are intermediate storage levels used to optimize data transfers between any system elements with different characteristics
Cache performance:
- Ideal case: found in the cache (hit); bad case: not found in the cache (miss), so search in the next level
- Average Memory Access Time (AMAT) = HT + MR × MP; hurt by miss rate and miss penalty