COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses Siddhartha Chatterjee Fall 2000

Cache Performance

Block Size Tradeoff

[Figure: three plots against block size — Miss Rate (falls as spatial locality is exploited, then rises when too few blocks compromise temporal locality), Miss Penalty (grows with block size), and Average Access Time (U-shaped, from the combined effect of increased miss penalty and miss rate).]

- In general, a larger block size takes advantage of spatial locality, BUT:
  - A larger block size means a larger miss penalty: it takes longer to fill the block
  - If the block size is too big relative to the cache size, the miss rate goes up: too few cache blocks
- Average Access Time = Hit Time + Miss Rate x Miss Penalty
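Since the whole tradeoff funnels into that one formula, it is easy to compute directly. A minimal sketch in C; the miss rates and penalties below are made-up numbers chosen only to show the U-shape, not figures from the lecture:

    #include <stdio.h>

    /* Average access time = hit time + miss rate * miss penalty. */
    static double avg_access_time(double hit, double miss_rate, double penalty) {
        return hit + miss_rate * penalty;
    }

    int main(void) {
        /* Hypothetical trend: bigger blocks lower the miss rate (spatial
         * locality) until too few blocks remain, but always raise the miss
         * penalty (more words to transfer on a fill). */
        const int    block[]   = { 16, 32, 64, 128 };
        const double rate[]    = { 0.070, 0.050, 0.040, 0.045 };  /* made up */
        const double penalty[] = { 40, 48, 64, 96 };              /* made up */

        for (int i = 0; i < 4; i++)
            printf("%3dB blocks: AMAT = %.2f cycles\n", block[i],
                   avg_access_time(1.0, rate[i], penalty[i]));
        return 0;
    }

With these placeholder numbers, AMAT bottoms out at 32B blocks and climbs again at 128B, which is exactly the shape of the tradeoff curve above.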

Sources of Cache Misses

- Compulsory (cold start, process migration, first reference): the first access to a block
  - A "cold" fact of life: not a whole lot you can do about it
- Conflict (collision/interference)
  - Multiple memory locations map to the same cache location
  - Solution 1: increase cache size
  - Solution 2: increase associativity
- Capacity
  - The cache cannot contain all blocks accessed by the program
  - Solution 1: increase cache size
  - Solution 2: restructure the program
- Coherence (invalidation)
  - Another process (e.g., I/O) updates memory

The 3C Model of Cache Misses

Based on comparing miss counts against other cache configurations:
- Compulsory — The first access to a block can never be in the cache, so the block must be brought in. These are also called cold-start misses or first-reference misses. (Misses even in an infinite cache)
- Capacity — If the cache cannot contain all the blocks needed during execution of a program (its working set), capacity misses occur as blocks are discarded and later retrieved. (Misses in a fully associative cache of size X)
- Conflict — If the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) occur because a block can be discarded and later retrieved when too many blocks map to its set. These are also called collision misses or interference misses. (Misses in an A-way set-associative cache of size X but not in a fully associative cache of size X)
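In practice the 3C breakdown is measured by simulation: the reference stream is replayed through idealized caches alongside the real one, and each miss is classified by where it would have hit. A minimal sketch of that methodology; seen_before, fa_hit, and real_hit are hypothetical stand-ins for full cache simulators, each returning 1 on a hit and updating its own state:

    /* 3C miss classification against an A-way, size-X cache. */
    int seen_before(unsigned long block);  /* infinite cache: ever referenced? */
    int fa_hit(unsigned long block);       /* fully associative LRU, size X    */
    int real_hit(unsigned long block);     /* the actual A-way cache, size X   */

    enum miss_kind { HIT, COMPULSORY, CAPACITY, CONFLICT };

    enum miss_kind classify(unsigned long block) {
        int ever = seen_before(block);  /* probe all three models every access */
        int fa   = fa_hit(block);
        int real = real_hit(block);

        if (real)  return HIT;
        if (!ever) return COMPULSORY;   /* misses even an infinite cache         */
        if (!fa)   return CAPACITY;     /* misses fully associative of size X    */
        return CONFLICT;                /* hits FA size X, misses the real cache */
    }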

Sources of Cache Misses

                        Direct Mapped    N-way Set Associative    Fully Associative
    Cache Size          Big              Medium                   Small
    Compulsory Miss     Same             Same                     Same
    Capacity Miss       Low(er)          Medium                   High
    Conflict Miss       High             Medium                   Zero
    Invalidation Miss   Same             Same                     Same

Note: If you are going to run "billions" of instructions, compulsory misses are insignificant.

3Cs Absolute Miss Rate

[Figure: absolute miss rate vs. cache size, decomposed into compulsory, capacity, and conflict components; the labeled conflict component shrinks as associativity increases.]

3Cs Relative Miss Rate

[Figure: the same decomposition plotted as a fraction of the total miss rate rather than as an absolute miss rate; the conflict component is again labeled.]

How to Improve Cache Performance

- Latency
  - Reduce miss rate (Section 5.3 of HP2)
  - Reduce miss penalty (Section 5.4 of HP2)
  - Reduce hit time (Section 5.5 of HP2)
- Bandwidth
  - Increase hit bandwidth
  - Increase miss bandwidth

Fall 2000Siddhartha Chatterjee10 1. Reduce Misses via Larger Block Size

2. Reduce Misses via Higher Associativity

- 2:1 Cache Rule
  - Miss rate of a direct-mapped cache of size N ≈ miss rate of a 2-way set-associative cache of size N/2
  - Not merely empirical: theoretical justification in Sleator and Tarjan, "Amortized efficiency of list update and paging rules", CACM, 28(2):202-208, 1985
- Beware: execution time is the only final measure!
  - Will clock cycle time increase?
  - Hill [1988] suggested hit time grows about +10% for an external cache and +2% for an internal cache for 2-way vs. 1-way

Example: Average Memory Access Time vs. Miss Rate

- Example: assume the clock cycle time is 1.10x for 2-way, 1.12x for 4-way, and 1.14x for 8-way, relative to the clock cycle time of a direct-mapped cache

[Table omitted: A.M.A.T. for each cache size and associativity; red entries marked cases where A.M.A.T. was not improved by more associativity.]

3. Reduce Conflict Misses via Victim Cache

- How to combine the fast hit time of direct mapped and still avoid conflict misses?
- Add a small, highly associative buffer to hold data discarded from the cache
- Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache

[Figure: CPU backed by a direct-mapped cache and a small fully associative victim buffer, whose tag/data entries are checked alongside the cache, with memory behind both.]
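A minimal, tag-only sketch of the lookup path (no data arrays), assuming a direct-mapped main cache backed by a tiny fully associative victim buffer; the sizes and names are illustrative, not from Jouppi's design:

    #include <stdint.h>

    #define SETS    128   /* direct-mapped cache: 128 blocks (illustrative) */
    #define VICTIMS 4     /* 4-entry fully associative victim cache */

    typedef struct { uint32_t tag; int valid; } line_t;

    static line_t cache[SETS];     /* tag = block_addr / SETS  */
    static line_t victim[VICTIMS]; /* tag = full block address */

    /* Returns 1 on a hit (possibly a slower "victim hit"), 0 on a true miss. */
    int lookup(uint32_t block_addr) {
        uint32_t set = block_addr % SETS;
        uint32_t tag = block_addr / SETS;

        if (cache[set].valid && cache[set].tag == tag)
            return 1;                               /* fast direct-mapped hit */

        for (int i = 0; i < VICTIMS; i++) {
            if (victim[i].valid && victim[i].tag == block_addr) {
                /* Victim hit: swap the victim line with the displaced line. */
                line_t displaced = cache[set];
                cache[set].tag = tag;
                cache[set].valid = 1;
                if (displaced.valid)
                    victim[i].tag = displaced.tag * SETS + set; /* rebuild addr */
                else
                    victim[i].valid = 0;
                return 1;
            }
        }
        return 0;  /* true miss: on the fill, the evicted line goes to victim[] */
    }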

4. Reduce Conflict Misses via Pseudo-Associativity

- How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way set-associative cache?
- Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, we have a pseudo-hit (slow hit)
- Drawback: CPU pipeline design is hard if a hit can take 1 or 2 cycles
  - Better for caches not tied directly to the processor

[Figure: access timeline showing Hit Time < Pseudo Hit Time < Miss Penalty.]
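The usual implementation (also called column-associative) probes a second location formed by flipping one index bit, so every block has exactly one alternate home in the other half of the cache. A minimal sketch of the index computation under that assumption:

    #include <stdint.h>

    #define INDEX_BITS 7   /* 128-set direct-mapped cache (illustrative) */
    #define INDEX_MASK ((1u << INDEX_BITS) - 1)

    /* Primary probe: the normal direct-mapped index. */
    static inline uint32_t primary_index(uint32_t block_addr) {
        return block_addr & INDEX_MASK;
    }

    /* Secondary probe (the pseudo-hit path): flip the most significant
     * index bit to reach the other half of the cache. */
    static inline uint32_t secondary_index(uint32_t block_addr) {
        return primary_index(block_addr) ^ (1u << (INDEX_BITS - 1));
    }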

5. Reduce Misses by Hardware Prefetching of Instructions & Data

- Instruction prefetching
  - The Alpha 21064 fetches 2 blocks on a miss
  - The extra block is placed in a stream buffer
  - On a miss, check the stream buffer
- Works with data blocks too
  - Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 stream buffers caught 43%
  - Palacharla & Kessler [1994]: for scientific programs, 8 stream buffers caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches
- Prefetching relies on extra memory bandwidth that can be used without penalty

6. Reduce Misses by Software Prefetching of Data

- Data prefetch
  - Load data into a register (HP PA-RISC loads): binding prefetch
  - Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9): non-binding prefetch
  - Special prefetching instructions cannot cause faults; a form of speculative execution
- Issuing prefetch instructions takes time
  - Is the cost of issuing prefetches < the savings in reduced misses?
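Compilers expose non-binding cache prefetch through intrinsics. A minimal sketch using GCC/Clang's __builtin_prefetch; the 16-element prefetch distance is a tuning guess, not a figure from the lecture:

    #include <stddef.h>

    /* Sum an array, prefetching ahead so later blocks are (hopefully) in the
     * cache by the time the loop reaches them. The prefetch is non-binding:
     * the hardware may drop it, and it can never fault. */
    double sum(const double *a, size_t n) {
        const size_t dist = 16;   /* prefetch distance: a tuning guess */
        double s = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + dist < n)
                __builtin_prefetch(&a[i + dist], /* rw = */ 0, /* locality = */ 1);
            s += a[i];
        }
        return s;
    }

Whether this wins depends on the question in the last bullet above: every prefetch costs an issue slot, so the savings in misses must outweigh the instruction overhead.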

7. Reduce Misses by Compiler Optimizations

- Instructions
  - Reorder procedures in memory so as to reduce misses
  - Use profiling to look at conflicts
  - McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks
- Data (each technique is illustrated in the examples that follow)
  - Merging arrays: improve spatial locality with a single array of compound elements instead of 2 separate arrays
  - Loop interchange: change the nesting of loops to access data in the order it is stored in memory
  - Loop fusion: combine two independent loops that have the same looping structure and overlap in the variables they touch
  - Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of walking down whole columns or rows

Merging Arrays Example

    /* Before: two separate arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: one array of compound elements */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

- Reduces conflicts between val and key
- Note that the addressing expressions are different

Loop Interchange Example

    /* Before: the innermost loop strides down columns of x[][] */
    for (k = 0; k < 100; k++)
        for (j = 0; j < 100; j++)
            for (i = 0; i < 5000; i++)
                x[i][j] = 2 * x[i][j];

    /* After: the innermost loop walks each row sequentially */
    for (k = 0; k < 100; k++)
        for (i = 0; i < 5000; i++)
            for (j = 0; j < 100; j++)
                x[i][j] = 2 * x[i][j];

- Sequential accesses instead of striding through memory every 100 words

Loop Fusion Example

    /* Before: two separate passes over the arrays */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            d[i][j] = a[i][j] + c[i][j];

    /* After: one fused pass */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

- Before: two misses per access to a and c; after: one miss per access

Blocking Example

    /* Before: naive N x N matrix multiply */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r = r + y[i][k] * z[k][j];
            x[i][j] = r;
        }

- Two inner loops:
  - Read all NxN elements of z[]
  - Read N elements of 1 row of y[] repeatedly
  - Write N elements of 1 row of x[]
- Capacity misses are a function of N and cache size
  - If all 3 NxN matrices fit in the cache, there are no capacity misses; otherwise...
- Idea: compute on a BxB submatrix that fits in the cache

Blocking Example (continued)

    /* After: blocked matrix multiply with blocking factor B.
     * x[][] must start zeroed, since partial sums accumulate across kk blocks.
     * (The loop bounds here are min(jj+B, N) and min(kk+B, N); the textbook's
     * min(jj+B-1, N) would skip the last element of each block.) */
    for (jj = 0; jj < N; jj = jj + B)
        for (kk = 0; kk < N; kk = kk + B)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj + B, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk + B, N); k++)
                        r = r + y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

- Capacity misses go from 2N^3 + N^2 to 2N^3/B + N^2
- B is called the blocking factor
- What happens to conflict misses?

Reducing Conflict Misses by Blocking

- Conflict misses in non-fully-associative caches as a function of the blocking factor
  - Lam et al. [1991] found that a blocking factor of 24 had one-fifth the misses of a blocking factor of 48, despite the fact that both fit in the cache

Fall 2000Siddhartha Chatterjee24 Summary of Compiler Optimizations to Reduce Cache Misses

1. Reduce Miss Penalty: Give Reads Priority over Writes on a Miss

- Write-through caches with write buffers risk RAW conflicts between buffered writes and main memory reads on cache misses
- Simply waiting for the write buffer to empty might increase the read miss penalty by 50% (old MIPS 1000)
- Instead, check the write buffer contents before the read; if there are no conflicts, let the memory access continue
- What about write back?
  - Consider a read miss replacing a dirty block
  - Normal: write the dirty block to memory, and then do the read
  - Instead: copy the dirty block to a write buffer, then do the read, and then do the write
  - The CPU stalls less, since it restarts as soon as the read completes
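A minimal sketch of the read-miss check against a simple write buffer; the 4-entry depth and all names are illustrative:

    #include <stdint.h>

    #define WB_DEPTH 4   /* illustrative write-buffer depth */

    typedef struct { uint32_t addr; uint32_t data; int valid; } wb_entry_t;
    static wb_entry_t write_buffer[WB_DEPTH];

    /* On a read miss, scan the write buffer for a pending write to the same
     * address. If one exists, forward its data; otherwise the read may bypass
     * all buffered writes and go to memory immediately. */
    int read_bypasses_writes(uint32_t addr, uint32_t *forwarded) {
        for (int i = 0; i < WB_DEPTH; i++) {
            if (write_buffer[i].valid && write_buffer[i].addr == addr) {
                *forwarded = write_buffer[i].data;  /* RAW conflict detected */
                return 0;
            }
        }
        return 1;  /* no conflict: safe to give the read priority */
    }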

2. Subblock Placement to Reduce Miss Penalty

- Don't have to load the full block on a miss
- Keep a valid bit per subblock to indicate which subblocks hold valid data
- (Originally invented to reduce tag storage)

[Figure: a cache line with a single tag and per-subblock valid bits.]

3. Early Restart and Critical Word First

- Don't wait for the full block to be loaded before restarting the CPU
  - Early restart — As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  - Critical word first — Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch or requested word first
- Generally useful only for large blocks
- Spatial locality is a problem: the CPU tends to want the next sequential word, so it is not clear how much early restart benefits
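The payoff is easy to quantify. A minimal sketch with made-up memory timing (30-cycle latency to the first word, 4 cycles per word, 8-word blocks; none of these figures come from the lecture):

    #include <stdio.h>

    int main(void) {
        const double latency     = 30;  /* cycles until the first word arrives */
        const double per_word    = 4;   /* cycles per additional word */
        const double block_words = 8;

        /* Without early restart, the CPU waits for the whole block. */
        double full_block = latency + block_words * per_word;       /* 62 */

        /* With critical word first, the CPU restarts after the missed word. */
        double critical_first = latency + 1 * per_word;             /* 34 */

        printf("full fill: %.0f cycles, restart after critical word: %.0f\n",
               full_block, critical_first);
        return 0;
    }

The gap grows with block size, which is why the technique matters mostly for large blocks.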

4. Non-blocking Caches to Reduce Stalls on Misses

- A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss
- "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring the requests of the CPU
- "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  - Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses

Value of Hit Under Miss for SPEC

- 8 KB direct-mapped data cache, 32B blocks, 16-cycle miss penalty
- FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
- Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19

[Figure: AMAT under "hit under i misses" for the SPEC integer and floating-point benchmarks as i increases.]

5. Miss Penalty Reduction: Second-Level Cache

- L2 Equations:

    AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
    Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
    AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)

- Definitions:
  - Local miss rate — misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2 above)
  - Global miss rate — misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
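A minimal sketch that plugs hypothetical numbers into these equations; none of the rates or times below come from the lecture:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical two-level parameters, purely for illustration. */
        double hit_l1 = 1,  miss_rate_l1  = 0.05;  /* 5% of CPU accesses miss L1 */
        double hit_l2 = 10, local_miss_l2 = 0.40;  /* 40% of L1 misses miss L2   */
        double penalty_l2 = 100;                   /* cycles to main memory      */

        double miss_penalty_l1 = hit_l2 + local_miss_l2 * penalty_l2;   /* 50  */
        double amat = hit_l1 + miss_rate_l1 * miss_penalty_l1;          /* 3.5 */

        /* The global L2 miss rate is measured against all CPU accesses. */
        double global_miss_l2 = miss_rate_l1 * local_miss_l2;           /* 2%  */

        printf("AMAT = %.2f cycles, global L2 miss rate = %.1f%%\n",
               amat, 100 * global_miss_l2);
        return 0;
    }

Note how small the global miss rate looks (2%) next to the local one (40%): this is why second-level caches are judged by their global miss rate.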

Reducing Misses: Which Apply to the L2 Cache?

- Reducing miss rate:
  1. Reduce misses via larger block size
  2. Reduce conflict misses via higher associativity
  3. Reduce conflict misses via victim cache
  4. Reduce conflict misses via pseudo-associativity
  5. Reduce misses by HW prefetching of instructions and data
  6. Reduce misses by SW prefetching of data
  7. Reduce capacity/conflict misses by compiler optimizations

L2 Cache Block Size & A.M.A.T.

- 32KB L1 cache; 8-byte path to memory

[Figure: A.M.A.T. vs. L2 cache block size.]

Reducing Miss Penalty Summary

- Five techniques:
  - Read priority over write on miss
  - Subblock placement
  - Early restart and critical word first on miss
  - Non-blocking caches (hit under miss)
  - Second-level cache
- Can be applied recursively to multilevel caches
  - The danger is that the time to DRAM will grow with multiple levels in between

Review: Improving Cache Performance

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

1. Fast Hit Times via Small, Simple Caches

- Why the Alpha 21164 has 8KB instruction and 8KB data caches + a 96KB second-level cache
- Direct mapped, on chip
- Impact of dynamic scheduling?
  - The Alpha 21264 has 64KB 2-way set-associative L1 data and instruction caches

2. Fast Hits by Avoiding Address Translation

- Send the virtual address to the cache? Called a virtually addressed cache, or just virtual cache, vs. a physical cache
  - Every time the process is switched, the cache must logically be flushed; otherwise we get false hits
    - Cost: time to flush + "compulsory" misses from an empty cache
  - Must deal with aliases (sometimes called synonyms): two different virtual addresses that map to the same physical address
  - I/O must interact with the cache, so it needs virtual addresses too
- Solutions to aliases
  - HW guarantee: each cache frame holds a unique physical address
  - SW guarantee: aliases must agree in their lower n address bits; as long as these bits cover the index field and the cache is direct mapped, aliases map to the same frame; called page coloring
- Solution to cache flushes
  - Add a process-identifier tag that identifies the process as well as the address within the process: we can't get a hit if the process is wrong

Virtually Addressed Caches

[Figure: three organizations.
 1. Conventional: CPU -> TLB -> cache (physically addressed) -> memory.
 2. Virtually addressed cache: CPU -> cache (virtual tags) -> TLB -> memory; translation happens only on a miss, but this raises the synonym problem.
 3. Overlapped: cache access proceeds in parallel with VA translation and physical tags are compared afterward; requires the cache index to remain invariant across translation (an L2 cache sits behind this arrangement).]

2. Avoiding Translation: Process ID Impact

[Figure (HP2 Fig. 5.26): miss rate (y-axis, up to 20%) vs. cache size (x-axis, 2 KB to 1024 KB).
 Black: uniprocess.
 Light gray: multiprocess with a cache flush on every process switch.
 Dark gray: multiprocess using a process-ID tag.]

2. Avoiding Translation: Index with the Physical Portion of the Address

- If the index comes from the physical (page-offset) part of the address, tag access can start in parallel with translation, so the comparison can use the physical tag

    Page address        | Page offset
    Tag         | Index | Block offset

- This limits the cache to the page size: what if we want bigger caches and the same trick?
  - Higher associativity
  - Page coloring
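The resulting constraint is simple arithmetic: the index and block-offset bits must fit inside the untranslated page offset, so cache size <= page size x associativity. A minimal sketch, assuming an 8 KB page purely as an example:

    #include <stdio.h>

    int main(void) {
        unsigned page_size = 8 * 1024;  /* bytes per page (assumed) */
        unsigned assoc     = 2;         /* ways */

        /* Each way can be at most one page, since the index must come
         * entirely from the page offset. */
        unsigned max_cache = page_size * assoc;
        printf("max physically indexed cache: %u KB\n", max_cache / 1024);
        return 0;
    }

This is exactly why the slide suggests higher associativity (more ways, same per-way limit) or page coloring (which makes more index bits translation-invariant) to grow beyond a page.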

Cache Optimization Summary

    Technique                           MR   MP   HT   Complexity
    Larger Block Size                   +    -         0
    Higher Associativity                +         -    1
    Victim Caches                       +              2
    Pseudo-Associative Caches           +              2
    HW Prefetching of Instr/Data        +              2
    Compiler-Controlled Prefetching     +              3
    Compiler Reduce Misses              +              0
    Priority to Read Misses                  +         1
    Subblock Placement                       +    +    1
    Early Restart & Critical Word 1st        +         2
    Non-Blocking Caches                      +         3
    Second-Level Caches                      +         2
    Small & Simple Caches               -         +    0
    Avoiding Address Translation                  +    2

    (MR = miss rate, MP = miss penalty, HT = hit time;
     + means the technique improves that factor, - means it hurts it)

Impact of Caches

- 1960-1985: Speed = f(no. of operations)
- 1997:
  - Pipelined execution & fast clock rate
  - Out-of-order completion
  - Superscalar instruction issue
- 1999: Speed = f(non-cached memory accesses)
- What does this mean for compilers, operating systems, algorithms, and data structures?