1
Buffer Overflow/Caches CSE 351 Winter 2017
Alt text: I looked at some of the data dumps from vulnerable sites, and it was ... bad. I saw emails, passwords, password hints. SSL keys and session cookies. Important servers brimming with visitor IPs. Attack ships on fire off the shoulder of Orion, c-beams glittering in the dark near the Tannhäuser Gate. I should probably patch OpenSSL.
2
Administrivia
Lab 3 is out, now due on Mon, Feb 20th. Enjoy the extension!
Midterm?
3
x86-64 Linux Memory Layout (not drawn to scale)

From the top of the user address space (0x00007FFFFFFFFFFF) down to 0x000000:
Stack: runtime stack (8 MB limit) for local variables; grows toward lower addresses
Shared Libraries: mapped between the stack and the heap
Heap: dynamically allocated as needed: malloc(), calloc(), new, ...
Data: statically allocated data; read-only (string literals) and read/write (global arrays and variables)
Code (Instructions): executable machine instructions, read-only, starting at hex address 0x400000
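A quick way to see this layout for yourself is to print an address from each region. A minimal sketch (the exact addresses will differ from run to run, especially with address-space randomization, but the relative ordering should match the picture):

    #include <stdio.h>
    #include <stdlib.h>

    int global = 351;                /* Data: statically allocated, read/write */

    int main() {
        int local = 0;               /* Stack */
        void *p = malloc(256);       /* Heap */
        printf("code  (main):   %p\n", (void *) &main);
        printf("data  (global): %p\n", (void *) &global);
        printf("heap  (p):      %p\n", p);
        printf("stack (local):  %p\n", (void *) &local);
        free(p);
        return 0;
    }

On a typical x86-64 Linux machine the stack address prints near 0x00007FFF... and the code address near 0x400000, with data and heap in between.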
4
Reminder: x86-64/Linux Stack Frame

Caller's stack frame (higher addresses):
- Arguments 7+ (if more than 6 args) for this call
- Return address, pushed by the call instruction

Current/callee stack frame (lower addresses, down to the stack pointer %rsp):
- Old frame pointer %rbp (optional); the frame pointer %rbp, if used, points here
- Saved register context (when reusing registers) and local variables (if they can't be kept in registers)
- "Argument build" area (if the callee needs to call another function: parameters for the function about to be called, if needed)
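To see the "Arguments 7+" region in action, here is a small illustrative function (hypothetical, not from the slides). Under the x86-64 calling convention the first six integer arguments travel in registers (%rdi, %rsi, %rdx, %rcx, %r8, %r9), so only g and h below pass through the caller's argument-build area on the stack:

    long sum8(long a, long b, long c, long d,
              long e, long f, long g, long h) {
        /* g and h arrive on the stack, just above the return address */
        return a + b + c + d + e + f + g + h;
    }

    int main() {
        /* the caller writes 7 and 8 into its argument-build area */
        return (int) sum8(1, 2, 3, 4, 5, 6, 7, 8);
    }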
6
Memory Allocation Example
not drawn to scale Memory Allocation Example Stack char big_array[1L<<24]; /* 16 MB */ char huge_array[1L<<31]; /* 2 GB */ int global = 0; int useless() { return 0; } int main() { void *p1, *p2, *p3, *p4; int local = 0; p1 = malloc(1L << 28); /* 256 MB */ p2 = malloc(1L << 8); /* 256 B */ p3 = malloc(1L << 32); /* 4 GB */ p4 = malloc(1L << 8); /* 256 B */ /* Some print statements ... */ } Heap big_array, huge_array, global: data int local, Shared Libraries Heap Data Instructions Where does everything go?
7
Buffer Overflows

Buffer overflows are possible because C does not check array boundaries, and dangerous because buffers for user input are often stored on the stack.
Specific topics:
Address space layout (more details!)
Input buffers on the stack
Overflowing buffers and injecting code
Defenses against buffer overflows
8
Internet Worm

These characteristics of the traditional Linux memory layout provide opportunities for malicious programs:
The stack grows "backwards" in memory
Data and instructions are both stored in the same memory
November 1988: the Internet Worm attacks thousands of Internet hosts. How did it happen? Stack buffer overflow exploits!
9
Buffer Overflow in a Nutshell

Many Unix/Linux/C functions don't check argument sizes, and C does not check array bounds. This allows overflowing (writing past the end of) buffers (arrays). Overflows of buffers on the stack overwrite "interesting" data; attackers just choose the right inputs.

Why a big deal? It is (was?) the #1 technical cause of security vulnerabilities. (The #1 overall cause is social engineering / user ignorance.)

Simplest form: unchecked lengths on string inputs, particularly for bounded character arrays on the stack. Sometimes referred to as "stack smashing".
10
String Library Code: Implementation of Unix function gets()

What could go wrong in this code?

/* Get string from stdin */
char* gets(char* dest) {
    int c = getchar();
    char* p = dest;          /* pointer to start of an array */
    while (c != EOF && c != '\n') {
        *p++ = c;            /* same as: *p = c; p++; */
        c = getchar();
    }
    *p = '\0';
    return dest;
}
11
String Library Code: Implementation of Unix function gets()

There is no way to specify a limit on the number of characters to read. Similar problems exist with other Unix functions:
strcpy: copies a string of arbitrary length into a destination buffer
scanf, fscanf, sscanf, when given the %s conversion specifier

/* Get string from stdin */
char* gets(char* dest) {
    int c = getchar();
    char* p = dest;
    while (c != EOF && c != '\n') {
        *p++ = c;
        c = getchar();
    }
    *p = '\0';
    return dest;
}

The man page for gets(3) now says: "BUGS: Never use gets()."
12
Vulnerable Buffer Code

/* Echo Line */
void echo() {
    char buf[4];  /* Way too small! */
    gets(buf);
    puts(buf);
}

void call_echo() {
    echo();
}

If we overrun the buffer, it's fine for a while... but then we add one more character, and it seg-faults! Why? (Let's see...)

unix> ./buf-nsp
Enter string: (a moderately long input is echoed back fine)
unix> ./buf-nsp
Enter string: (one character more)
Segmentation Fault
13
Buffer Overflow Disassembly

echo:
  00000000004006cf <echo>:
  4006cf: sub   $24,%rsp
  4006d3: mov   %rsp,%rdi
  4006d6: callq <gets>
  4006db: mov   %rsp,%rdi
  4006de: callq <puts>
  4006e3: add   $24,%rsp
  4006e7: ret

call_echo:
  4006e8: sub   $8,%rsp
  4006ec: mov   $0x0,%eax
  4006f1: callq 4006cf <echo>
  4006f6: add   $8,%rsp       <-- 0x4006f6 is the return address pushed by the callq above
  4006fa: ret
14
Buffer Overflow Stack

Before the call to gets (note: in these stack pictures, addresses increase right-to-left, bottom-to-top; sorry):

Stack frame for call_echo:
  Return address (8 bytes)
  20 bytes unused
  buf: [3][2][1][0]   <-- %rsp points at buf

/* Echo Line */
void echo() {
    char buf[4];  /* Way too small! */
    gets(buf);
    puts(buf);
}

echo:
    subq $24, %rsp
    movq %rsp, %rdi
    call gets
    . . .
15
Buffer Overflow Example

Before the call to gets:

Stack frame for call_echo:
  Return address: 00 40 06 f6  (the little-endian bytes of 0x4006f6)
  20 bytes unused
  buf: [3][2][1][0]   <-- %rsp

void echo() {
    char buf[4];
    gets(buf);
}

echo:
    subq $24, %rsp
    movq %rsp, %rdi
    call gets
    . . .

call_echo:
    . . .
    4006f1: callq 4006cf <echo>
    4006f6: add $8,%rsp
16
Buffer Overflow Example #1

After the call to gets:

Stack frame for call_echo:
  Return address: 00 40 06 f6  (still intact)
  The unused bytes now hold digit characters: ... 32 31 30 39 38 37 36 35 34 33 (a null-terminated string)
  buf: the first four digits   <-- %rsp

unix> ./buf-nsp
Enter string: (a run of digits long enough to spill past buf, but not far enough to reach the return address)

Note: digit "N" is just 0x3N in ASCII! We overflowed the buffer, but did not corrupt state.
17
Buffer Overflow Example #2
After call to gets Stack frame for call_echo 00 40 34 33 32 31 30 39 38 37 36 35 void echo() { char buf[4]; gets(buf); } echo: subq $24, %rsp movq %rsp, %rdi call gets . . . call_echo: . . . 4006f1: callq 4006cf <echo> 4006f6: add $8,%rsp buf ⟵%rsp unix> ./buf-nsp Enter string: Segmentation Fault Overflowed buffer and corrupted return pointer
18
Buffer Overflow Example #3

After the call to gets:

Stack frame for call_echo:
  Return address: 00 40 06 33  (only the lowest byte was overwritten, turning 0x4006f6 into 0x400633)
  ... 32 31 30 39 38 37 36 35 34 below it
  buf   <-- %rsp

void echo() {
    char buf[4];
    gets(buf);
}

echo:
    subq $24, %rsp
    movq %rsp, %rdi
    call gets
    . . .

call_echo:
    . . .
    4006f1: callq 4006cf <echo>
    4006f6: add $8,%rsp

unix> ./buf-nsp
Type a string: (exactly enough digits to clobber one byte of the return address)

We overflowed the buffer and corrupted the return pointer, yet the program seems to work!
19
Buffer Overflow Example #3 Explained

After the call to gets, the corrupted return address 0x400633 lands inside register_tm_clones:

register_tm_clones:
    . . .
    400600: mov %rsp,%rbp
    400603: mov %rax,%rdx
    400606: shr $0x3f,%rdx
    40060a: add %rdx,%rax
    40060d: sar %rax
    400610: jne ...
    400612: pop %rbp
    400613: retq

register_tm_clones deals with transactional memory, which is intended to make programming with threads simpler (parallelism and synchronization: waaaaaay beyond the scope of this course). echo() "returns" into this unrelated code; lots of things happen, but without modifying critical state, and it eventually executes retq back to main.
20
Malicious Use of Buffer Overflow: Code Injection Attacks

void foo() {
    bar();
A:  ...                  /* return address A points here */
}

int bar() {
    char buf[64];
    gets(buf);
    ...
    return ...;
}

Stack after the call to gets(): foo's frame (at higher addresses) held return address A; bar's frame holds buf, which starts at address B (lower addresses). The data written by gets() fills buf with exploit code, then padding, and finally overwrites the stored return address A with the value B.

The input string contains the byte representation of executable code. It overwrites return address A with the address B of the buffer, so when bar() executes ret, control jumps to the exploit code.
21
Exploits Based on Buffer Overflows
Buffer overflow bugs can allow remote machines to execute arbitrary code on victim machines Distressingly common in real programs Programmers keep making the same mistakes Recent measures make these attacks much more difficult Examples across the decades Original “Internet worm” (1988) Still happens!! Heartbleed (2014, affected 17% of servers) Fun: Nintendo hacks Using glitches to rewrite code: FlappyBird in Mario: You will learn some of the tricks in Lab 3 Hopefully to convince you to never leave such holes in your programs!!
22
Example: the Original Internet Worm (1988)

Exploited a few vulnerabilities to spread:
Early versions of the finger server (fingerd) used gets() to read the argument sent by the client (e.g., finger <username>)
The worm attacked fingerd servers by sending a phony argument:
  finger "exploit-code padding new-return-addr"
The exploit code executed a root shell on the victim machine with a direct TCP connection to the attacker.

Once on a machine, it scanned for other machines to attack, and invaded ~6000 computers in hours (10% of the Internet at the time); see the June 1989 article in Comm. of the ACM. The young author of the worm was prosecuted...

(TCP: transmission control protocol)
23
Heartbleed (2014!): Buffer Over-Read in OpenSSL

OpenSSL is an open-source security library; the bug existed in a small range of versions. A "Heartbeat" packet specifies the length of its message, and the server echoes the message back; the library just "trusted" this length. That allowed attackers to read the contents of memory anywhere they wanted. An estimated 17% of the Internet was affected: "catastrophic". GitHub, Yahoo, Stack Overflow, Amazon AWS, ...
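The bug boils down to trusting an attacker-supplied length field. A simplified sketch of the vulnerable pattern (illustrative only; this is not OpenSSL's actual code, and send_to_client is a hypothetical stand-in for the network send):

    #include <string.h>

    void send_to_client(unsigned char *buf, unsigned int n);   /* hypothetical network send */

    void heartbeat_reply(unsigned char *payload, unsigned int claimed_len) {
        unsigned char response[65536];
        /* BUG: trusts the client's length field. If the actual payload is
         * shorter than claimed_len, memcpy reads past it into adjacent
         * server memory (keys, passwords, cookies, ...) and echoes it back. */
        memcpy(response, payload, claimed_len);
        send_to_client(response, claimed_len);
    }

The fix was essentially one check: discard the request unless claimed_len matches the number of bytes actually received.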
24
Dealing with Buffer Overflow Attacks

1) Avoid overflow vulnerabilities
2) Employ system-level protections
3) Have the compiler use "stack canaries"
25
1) Avoid Overflow Vulnerabilities in Code

/* Echo Line */
void echo() {
    char buf[4];  /* Way too small! */
    fgets(buf, 4, stdin);
    puts(buf);
}

Use library routines that limit string lengths:
fgets instead of gets (the 2nd argument to fgets sets the limit)
strncpy instead of strcpy
Don't use scanf with the %s conversion specification; use fgets to read the string, or use %ns where n is a suitable integer
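One subtlety: strncpy does not null-terminate the destination if the source fills it, so a safe copy still needs one extra step. A sketch of both safer patterns:

    #include <stdio.h>
    #include <string.h>

    int main() {
        char buf[16];

        /* fgets reads at most sizeof(buf)-1 chars and always null-terminates */
        if (fgets(buf, sizeof(buf), stdin) != NULL)
            puts(buf);

        /* strncpy needs explicit termination when the source may be too long */
        char copy[8];
        strncpy(copy, buf, sizeof(copy) - 1);
        copy[sizeof(copy) - 1] = '\0';
        puts(copy);
        return 0;
    }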
26
2) System-Level Protections

Randomized stack offsets: at the start of the program, allocate a random amount of space on the stack. This shifts stack addresses for the entire program, so addresses vary from one run to another, making it difficult for a hacker to predict where inserted code will begin.

Example: the code from Slide 6 executed 5 times; the address of variable local was
  0x7ffe4d3be87c
  0x7fff75a4f9fc
  0x7ffeadb7c80c
  0x7ffeaea2fdac
  0x7ffcd452017c
The stack is repositioned each time the program executes.
27
2) System-Level Protections

Non-executable code segments: traditional x86 could mark a region of memory only as "read-only" or "writeable", and anything readable could be executed. x86-64 added an explicit "execute" permission (hardware support is needed). With the stack marked non-executable, the processor will NOT execute code in the Stack, Static Data, or Heap regions: any attempt to execute exploit code written onto the stack by gets() will fail.
28
3) Stack Canaries

Basic idea: place a special value (the "canary") on the stack just beyond the buffer, a secret value known only to the compiler, "after" the buffer but before the return address. Check it for corruption before exiting the function.

GCC implementation (now the default): -fstack-protector. The vulnerable echo code from earlier (buf-nsp) was compiled with the -fno-stack-protector flag; with the protector on:

unix> ./buf
Enter string: (short input, echoed back fine)
unix> ./buf
Enter string: (overflowing input)
*** stack smashing detected ***
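Conceptually, the compiler rewrites echo() into something like the sketch below. This is only an approximation of the idea: the real check is emitted as assembly, the compiler controls where the canary sits relative to buf, and the canary value typically lives in thread-local storage. (The glibc symbols __stack_chk_guard and __stack_chk_fail do exist, but you never write this code yourself.)

    #include <stdio.h>

    extern long __stack_chk_guard;        /* random secret, set at program startup */
    void __stack_chk_fail(void);          /* prints "stack smashing detected" and aborts */

    void echo(void) {
        long canary = __stack_chk_guard;  /* placed between buf and the return address */
        char buf[4];
        gets(buf);
        puts(buf);
        if (canary != __stack_chk_guard)  /* overwritten? the buffer overflowed */
            __stack_chk_fail();
    }

An overflow that reaches the return address must first trample the canary, so the corruption is detected before ret executes.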
29
Summary

Avoid overflow vulnerabilities: use library routines that limit string lengths
Employ system-level protections: randomized stack offsets; code on the stack is not executable
Have the compiler use "stack canaries"
30
Roadmap

C:    car *c = malloc(sizeof(car)); c->miles = 100; c->gals = 17;
      float mpg = get_mpg(c); free(c);
Java: Car c = new Car(); c.setMiles(100); c.setGals(17);
      float mpg = c.getMPG();
Assembly language:
      get_mpg:
          pushq %rbp
          movq  %rsp, %rbp
          ...
          popq  %rbp
          ret
Then OS, machine code, computer system.

Course topics: memory & data, integers & floats, machine code & C, x86 assembly, procedures & stacks, arrays & structs, memory & caches (today), processes, virtual memory, memory allocation, Java vs. C.

So far we have been talking about the explicit hardware interface: instructions, registers, memory accesses. Different implementations of that interface can have different performance; strictly speaking, performance is not part of the interface itself.
31
How does execution time grow with SIZE?

int array[SIZE];
int sum = 0;

for (int i = 0; i < REPEATS; i++) {   /* some fixed number of repetitions */
    for (int j = 0; j < SIZE; j++) {
        sum += array[j];
    }
}

What should we expect for this experiment? (Sketch a plot of Time vs. SIZE.)
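A minimal harness for running the experiment yourself might look like this (REPEATS is an arbitrary constant chosen here for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define REPEATS 200000   /* arbitrary: just enough passes to time reliably */

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        long size = atol(argv[1]);              /* array length, in ints */
        int *array = calloc(size, sizeof(int));
        long sum = 0;

        clock_t start = clock();
        for (long i = 0; i < REPEATS; i++)
            for (long j = 0; j < size; j++)
                sum += array[j];
        double ns_per_elem = (double)(clock() - start) / CLOCKS_PER_SEC
                             / ((double) REPEATS * size) * 1e9;

        printf("SIZE=%ld  %.3f ns/element  (sum=%ld)\n", size, ns_per_elem, sum);
        free(array);
        return 0;
    }

Plotting ns/element against SIZE makes the cache levels visible: the curve stays flat while the whole array fits in a given cache, then jumps at each capacity threshold.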
32
Actual Data

(Plot: Time vs. SIZE.) There is a knee in the curve: after some size threshold, the array no longer fits in cache and execution time grows much faster. A steeper slope means execution time grows faster with SIZE. This is the memory "wall": fits in cache vs. doesn't fit in cache.
33
Problem: Processor-Memory Bottleneck

Processor performance doubled about every 18 months, while main-memory bus latency and bandwidth evolved much more slowly.

Core 2 Duo example: the CPU can process at least 256 bytes/cycle, but memory bandwidth is only 2 bytes/cycle, with latency of roughly 30-60 ns (on the order of a hundred cycles). Problem: lots of waiting on memory. (A cycle is a single machine step, fixed-time.)

Analogy: if one bite of food takes ~5-10 seconds, going to memory is like a trip to the grocery store: minutes.

(Obligatory joke: there are only 2 hard problems in computer science: cache invalidation, naming things, and off-by-one errors.)
34
Problem: Processor-Memory Bottleneck

Same numbers as before (Core 2 Duo: at least 256 bytes/cycle processed vs. 2 bytes/cycle of memory bandwidth and 30-60 ns latency), but now with a cache between the CPU registers and main memory. Solution: caches let you have your avocados and eat them too. (A cycle is a single machine step, fixed-time.)
35
Cache 💰 Pronunciation: “cash”
We abbreviate this as “$” English: A hidden storage space for provisions, weapons, and/or treasures Computer: Memory with short access time used for the storage of frequently or recently used instructions (i-cache/I$) or data (d-cache/D$) More generally: Used to optimize data transfers between any system elements with different characteristics (network interface cache, I/O cache, etc.)
36
General Cache Mechanics

Cache: smaller, faster, more expensive memory; caches a subset of the blocks (a.k.a. lines), e.g. blocks 7, 9, 14, and 3.
Memory: larger, slower, cheaper memory, viewed as partitioned into "blocks" or "lines" (numbered 0 through 15 in the picture).
Data is copied between them in block-sized transfer units. Note: the transfer units are blocks, not bytes.
37
General Cache Concepts: Hit

Request: block 14. Data in block b is needed, and block b is in the cache: hit! The cached copy of block 14 is returned directly.
38
General Cache Concepts: Miss

Request: block 12. Data in block b is needed, but block b is not in the cache: miss! Block b is fetched from memory and then stored in the cache.
Placement policy: determines where b goes.
Replacement policy: determines which block gets evicted (the victim).
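To make placement and replacement concrete, here is a toy direct-mapped cache in C (a sketch under simplifying assumptions: 16 memory blocks, 4 cache slots, placement by block number mod 4, so the replacement decision is forced and an incoming block always evicts whatever occupies its one slot):

    #include <stdio.h>
    #include <stdbool.h>

    #define SLOTS 4

    int  slot_block[SLOTS];   /* which memory block each slot currently holds */
    bool slot_valid[SLOTS];   /* does the slot hold anything yet? */

    /* Returns true on a hit; on a miss, "fetches" the block into its slot. */
    bool access_block(int b) {
        int slot = b % SLOTS;                     /* placement policy */
        if (slot_valid[slot] && slot_block[slot] == b)
            return true;                          /* hit */
        slot_block[slot] = b;                     /* miss: evict the victim, fill */
        slot_valid[slot] = true;
        return false;
    }

    int main() {
        int requests[] = {14, 14, 12, 8, 12};     /* 8 and 12 collide: both map to slot 0 */
        for (int i = 0; i < 5; i++)
            printf("block %2d: %s\n", requests[i],
                   access_block(requests[i]) ? "hit" : "miss");
        return 0;
    }

Real caches use the same structure, just with tags derived from addresses, multi-way sets, and a smarter replacement policy (e.g., LRU) when a set has more than one slot.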
39
Why Caches Work

Locality: programs tend to use data and instructions with addresses near or equal to those they have used recently.

Temporal locality: recently referenced items are likely to be referenced again in the near future. (Analogy: you took a bite of your sandwich, so you'll probably take another bite soon.)

Spatial locality: items with nearby addresses tend to be referenced close together in time. (Analogy: you'll probably take a bite out of the other half of the same sandwich, as opposed to a new sandwich.)

How do caches take advantage of this?
42
Example: Any Locality? Data? Instructions?

sum = 0;
for (i = 0; i < n; i++) {
    sum += a[i];
}
return sum;

Data:
Temporal: sum is referenced in each iteration
Spatial: array a[] is accessed in a stride-1 pattern
Instructions:
Temporal: we cycle through the loop repeatedly
Spatial: we reference instructions in sequence
43
Locality Example #1

int sum_array_rows(int a[M][N]) {
    int i, j, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
44
Locality Example #1: M = 3, N = 4

Access pattern (sum_array_rows above):
a[0][0], a[0][1], a[0][2], a[0][3], a[1][0], a[1][1], a[1][2], a[1][3], a[2][0], a[2][1], a[2][2], a[2][3]

Layout in memory (row-major): exactly the same order, at consecutive addresses; with 4-byte ints, the three rows start at addresses 76, 92, and 108. (Note: 76 is just one possible starting address of array a.)

stride = 1: the access pattern marches straight through memory.
45
Locality Example #2

int sum_array_cols(int a[M][N]) {
    int i, j, sum = 0;
    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
46
Locality Example #2: M = 3, N = 4

Access pattern (sum_array_cols above):
a[0][0], a[1][0], a[2][0], a[0][1], a[1][1], a[2][1], a[0][2], a[1][2], a[2][2], a[0][3], a[1][3], a[2][3]

Layout in memory is the same row-major order as before (rows starting at 76, 92, 108), but now each access jumps a full row ahead.

stride = N = 4: far worse spatial locality than Example #1.
47
Locality Example #3

int sum_array_3D(int a[M][N][L]) {
    int i, j, k, sum = 0;
    for (i = 0; i < N; i++)
        for (j = 0; j < L; j++)
            for (k = 0; k < M; k++)
                sum += a[k][i][j];
    return sum;
}

What is wrong with this code? How can it be fixed? (Think of the array as M planes, k = 0, 1, 2, ..., each an N x L grid.)
48
Locality Example #3 (continued)

Layout in memory (M = ?, N = 3, L = 4): a[0][0][0], a[0][0][1], a[0][0][2], a[0][0][3], a[0][1][0], ..., a[0][2][3], then a[1][0][0], ..., a[1][2][3], and so on (rows starting at addresses 76, 92, 108, 124, 140, 156, 172, ...). The rightmost index j varies fastest, then i, then k.

The loop nest above varies k in the innermost loop, so consecutive accesses are a whole N x L plane apart; see the fixed version sketched below.
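A sketch of the fix (assuming M, N, L are compile-time constants, as on the slides): reorder the loops so the innermost loop varies the rightmost index, giving stride-1 accesses:

    int sum_array_3D_fixed(int a[M][N][L]) {
        int i, j, k, sum = 0;
        /* Loop order now matches memory order: k (slowest-varying index)
         * outermost, j (fastest-varying index) innermost. */
        for (k = 0; k < M; k++)
            for (i = 0; i < N; i++)
                for (j = 0; j < L; j++)
                    sum += a[k][i][j];
        return sum;
    }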
49
Cache Performance Metrics
Huge difference between a cache hit and a cache miss: it could be a 100x speed difference between accessing cache and main memory (measured in clock cycles).

Miss Rate (MR): fraction of memory references not found in the cache (misses / accesses) = 1 - Hit Rate
Hit Time (HT): time to deliver a block in the cache to the processor, including the time to determine whether the block is in the cache
Miss Penalty (MP): additional time required because of a miss
51
Cache Performance

Two things hurt the performance of a cache: miss rate and miss penalty.

Average Memory Access Time (AMAT): the average time to access memory, considering both hits and misses:
AMAT = Hit time + Miss rate × Miss penalty (abbreviated AMAT = HT + MR × MP)

A 99% hit rate is twice as good as a 97% hit rate! Assume an HT of 1 clock cycle and an MP of 100 clock cycles:
97%: AMAT = 1 + 0.03 × 100 = 4 cycles
99%: AMAT = 1 + 0.01 × 100 = 2 cycles
52
Peer Instruction Question

Processor specs: 200 ps clock, MP of 50 clock cycles, MR of 0.02 misses/instruction, and HT of 1 clock cycle.
AMAT = ?
Which improvement would be best?
(a) 190 ps clock
(b) MP of 40 clock cycles
(c) MR of ___ misses/instruction
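Working it out with AMAT = HT + MR × MP (the miss rate for option (c) is blank above, so only the first two improvements can be compared here):

    Baseline:     AMAT = 1 + 0.02 × 50 = 2 cycles   = 2 × 200 ps   = 400 ps
    190 ps clock: still 2 cycles                    = 2 × 190 ps   = 380 ps
    MP of 40:     AMAT = 1 + 0.02 × 40 = 1.8 cycles = 1.8 × 200 ps = 360 ps

Of the two options we can evaluate, reducing the miss penalty wins.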
53
Can we have more than one cache?

Why would we want to do that? To avoid going to memory!

Typical performance numbers:
Miss Rate: L1 MR = 3-10%; L2 MR is quite small (e.g., < 1%), depending on parameters, etc.
Hit Time: L1 HT = 4 clock cycles; L2 HT = 10 clock cycles
Miss Penalty (MP): the cost of missing in L2 and going to main memory, on the order of a hundred cycles or more, and the trend is increasing!
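With two levels, the L2 acts as the "memory" seen by L1 misses, so the AMAT formula nests. Using the typical numbers above plus a hypothetical 100-cycle penalty for going all the way to main memory (and taking L1 MR = 5%, L2 MR = 1% for concreteness):

    AMAT = HT_L1 + MR_L1 × (HT_L2 + MR_L2 × MP)
         = 4 + 0.05 × (10 + 0.01 × 100)
         = 4 + 0.05 × 11  =  4.55 cycles

versus 4 + 0.05 × 100 = 9 cycles with no L2 at all, which is why we want more than one cache.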
54
Memory Hierarchies Some fundamental and enduring properties of hardware and software systems: Faster storage technologies almost always cost more per byte and have lower capacity The gaps between memory technology speeds are widening True for: registers ↔ cache, cache ↔ DRAM, DRAM ↔ disk, etc. Well-written programs tend to exhibit good locality These properties complement each other beautifully They suggest an approach for organizing memory and storage systems known as a memory hierarchy
55
An Example Memory Hierarchy

Smaller, faster, costlier per byte at the top; larger, slower, cheaper per byte toward the bottom. Approximate latencies, with a human-scale analogy (scaling time up so one cache access feels like taking a bite of food):

registers:                        < 1 ns
on-chip L1 cache (SRAM):          1 ns           (~5-10 s)
off-chip L2 cache (SRAM):         5-10 ns        (~1-2 min)
main memory (DRAM):               100 ns         (~15-30 min)
SSD:                              150,000 ns     (~31 days)
local secondary storage (disks):  10,000,000 ns = 10 ms  (~66 months, about 5.5 years)
remote secondary storage
(distributed file systems,
web servers):                     1-150 ms       (years)

Analogies: going to SSD is like walking down to California to get an avocado; disk is like waiting a year for a new crop of avocados; going across the world is like waiting for an avocado tree to grow up from a seed...
56
An Example Memory Hierarchy

What each level holds:
CPU registers hold words retrieved from the L1 cache
On-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
Off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
Main memory (DRAM) holds disk blocks retrieved from local disks
Local disks (local secondary storage) hold files retrieved from disks on remote network servers (remote secondary storage: distributed file systems, web servers)

The levels are implemented with different technologies: SRAM, DRAM, disks, solid-state.
DRAM: little wells that drain slowly, refreshed periodically.
SRAM: flip-flops, little logic gates that feed back on each other; faster, but uses more power! That's why the levels have different sizes and costs.
57
An Example Memory Hierarchy

Registers are explicitly program-controlled (e.g., you refer to exactly %rax or %rbx). Below that, the program just sees "memory"; the hardware manages the caching transparently (on-chip L1, off-chip L2, main memory, local and remote secondary storage).

So why haven't we seen caches before now in this class? Because they're designed to be architecturally transparent!
58
Memory Hierarchies Fundamental idea of a memory hierarchy:
For each level k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1 Why do memory hierarchies work? Because of locality, programs tend to access the data at level k more often than they access the data at level k+1 Thus, the storage at level k+1 can be slower, and thus larger and cheaper per bit Big Idea: The memory hierarchy creates a large pool of storage that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top
59
Intel Core i7 Cache Hierarchy

Processor package: four cores (Core 0 ... Core 3), each with its own registers, L1 i-cache and d-cache, and L2 unified cache; a single L3 unified cache shared by all cores; then main memory.

Block size: 64 bytes for all caches.
L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
L2 unified cache: 256 KB, 8-way, access: 11 cycles
L3 unified cache (shared by all cores): 8 MB, 16-way, access: 30-40 cycles

(On Linux you can query cache parameters, e.g.:
cat /sys/devices/system/cpu/cpu0/cache/index0/ways_of_associativity)
60
Summary

Memory hierarchy: successively higher levels contain the "most used" data from lower levels, exploiting temporal and spatial locality. Caches are intermediate storage levels used to optimize data transfers between any system elements with different characteristics.

Cache performance:
Ideal case: found in cache (hit)
Bad case: not found in cache (miss); search in the next level
Average Memory Access Time (AMAT) = HT + MR × MP, hurt by miss rate and miss penalty