
1 Chap. 7.4: Virtual Memory

2 CS61C L35 VM I (2) Garcia © UCB
Review: Caches
Cache design choices:
- size of cache: speed v. capacity
- direct-mapped v. associative; for N-way set associative, the choice of N
- block replacement policy
- 2nd-level cache?
- write-through v. write-back?
Use a performance model to pick between choices, depending on programs, technology, budget, ...

3 Another View of the Memory Hierarchy
From the upper level (faster) to the lower level (larger):
- Registers <-> cache: instructions and operands (covered thus far)
- Cache / L2 cache <-> memory: blocks (covered thus far)
- Memory <-> disk: pages (next: virtual memory)
- Disk <-> tape: files

4 Recall: the illusion of memory
The programmer's view of memory: an unlimited amount of fast memory.
How do we create this illusion?
Analogy: a library. The desk holds the few books in use; the bookshelf holds all the rest.

5 Virtual memory
Creates the illusion of unlimited memory.
Motivation:
- A collection of programs runs at once on a machine, and the total memory required by all programs exceeds the amount of main memory available.
- Allow even a single user program to exceed the size of primary memory.
- The old software solution: programmers divided programs into pieces, called overlays, and managed them by hand.

6 Process
The state of a running program is called a process.
State of a program: program counter (PC), registers, page table, ...
To allow another program to use the CPU, we must save this state.
Context switching: the OS (operating system) changes from running process P1 to running process P2.

7 Simple Example: Base and Bound Registers
The OS sits at address 0; User A, User B, and User C each occupy the physical range [$base, $base + $bound).
Problems:
- We want a discontinuous mapping; addition alone is not enough.
- A process may be much larger than memory.
- There may be enough total space for User D, but only in discontinuous pieces (the "fragmentation problem").
=> What to do?
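The base-and-bound check above can be sketched in a few lines. This is a minimal illustration, not how any particular OS implements it; the function name and the use of `MemoryError` to stand in for a hardware trap are assumptions.

```python
def relocate(base, bound, vaddr):
    """Translate a virtual address under base-and-bound protection.

    Every address the process generates is checked against $bound,
    then relocated by adding $base (a single addition in hardware).
    """
    if vaddr >= bound:
        # Out-of-range access: hardware would raise a protection trap.
        raise MemoryError("address exceeds bound")
    return base + vaddr
```

Note how the scheme forces each process into one contiguous region: there is no way for `relocate` to produce a discontinuous mapping, which is exactly the limitation the slide points out.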

8 Virtual memory system
Treat main memory as a cache for the secondary storage (disk).
How do we map memory addresses to disk?

9 More about Virtual Memory
- Called "virtual memory": each process thinks it has all the memory to itself.
- Also allows the OS to share memory and to protect programs from each other.
- Today it is more important for protection than as just another level of the memory hierarchy.
- Historically, virtual memory predates caches.

10 Mapping Virtual Memory to Physical Memory
Virtual memory (code, static, heap, stack) is mapped onto 64 MB of physical memory starting at address 0.
Divide both into equal-sized chunks (about 4 KB - 8 KB); any chunk of virtual memory can be assigned to any chunk of physical memory (a "page").

11 Paging Organization (assume 1 KB pages)
The page is the unit of mapping; the address-translation map takes virtual pages to physical pages.
- Virtual memory: page 0 at virtual address 0, page 1 at 1024, page 2 at 2048, ..., page 31 at 31744 (32 pages of 1 KB each).
- Physical memory: page 0 at physical address 0, page 1 at 1024, ..., page 7 at 7168 (8 pages of 1 KB each).
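With 1 KB pages as on this slide, splitting a virtual address into a page number and a page offset is just a division and a remainder (a shift and a mask in hardware). A small sketch:

```python
PAGE_SIZE = 1024  # 1 KB pages, as assumed on the slide

def split(vaddr):
    """Split a virtual address into (virtual page number, page offset).

    Translation changes only the page number; the offset within
    the page is carried through unchanged.
    """
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE
```

For example, virtual address 31744 is the first byte of page 31, matching the last virtual page in the slide's layout.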

12 VM address translation (1)
Page size = 2^12 bytes.
Virtual address space = 2^32 bytes = 4 GB; physical address space = 2^30 bytes = 1 GB.
How do we translate virtual addresses? Direct-mapped, set-associative, or fully associative?

13 VM address translation (2)
Each program has its own page table, with 2^20 entries (one per 4 KB virtual page).
Address translation is fully associative: any virtual page can be placed in any physical page.
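Using the parameters from the two slides above (2^12-byte pages, so a 20-bit virtual page number), a page-table lookup can be sketched as follows. This is illustrative: a real page table is a dense 2^20-entry array with valid bits, while the `dict` here is just a compact stand-in, and raising `KeyError` stands in for a page fault.

```python
PAGE_SIZE = 1 << 12  # 4 KB pages => low 12 bits are the page offset

def translate(page_table, vaddr):
    """Translate a virtual address through a per-process page table."""
    vpn = vaddr >> 12                 # upper 20 bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # lower 12 bits: unchanged by translation
    if vpn not in page_table:
        # Page not resident: the OS must fetch it from disk (a page fault).
        raise KeyError("page fault on virtual page %d" % vpn)
    ppn = page_table[vpn]             # physical page number
    return (ppn << 12) | offset
```

Because the table is indexed by virtual page number and can hold any physical page number in any slot, the placement is fully associative, as the slide says.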

14 Paging/Virtual Memory with Multiple Processes
User A and User B each have their own virtual memory (code, static, heap, stack, each starting at address 0) and their own page table (A's page table, B's page table), both mapping into the same 64 MB of physical memory.

15 Page fault (the analogue of a cache miss)
A page fault occurs when the virtual page number has no valid mapping in physical memory.

16 Key issues in VM
The page fault penalty takes millions of cycles, so:
- Pages should be large enough to amortize the high access time (e.g., 32-64 KB).
- Reduce the page fault rate with fully associative placement.
- Use a clever algorithm, such as LRU (least recently used), to choose which page to replace when a page fault occurs.
- Use write-back instead of write-through.

17 Making address translation faster
Motivation: page tables are stored in main memory. A load/store presents a virtual address, the page table (in memory) translates it to a physical address, and only then can main memory deliver the data: 2 memory accesses per reference!
Solution: use a cache to store page-table entries.

18 Translation-lookaside buffer (TLB)
A TLB miss occurs when the translation for a virtual page is not found in the TLB.

19 VM, TLB, and cache
- TLBs are usually small, typically 128 - 256 entries.
- Like any other cache, the TLB can be direct-mapped, set-associative, or fully associative.
- The processor sends a virtual address (VA) to the TLB; on a TLB hit, the physical address (PA) goes to the cache, which returns data on a hit and falls through to main memory on a miss.
- On a TLB miss, get the page table entry from main memory.
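The TLB-as-a-cache-of-the-page-table idea can be sketched as below. A hypothetical helper, assuming a fully associative TLB modeled as a `dict`; the `(ppn, hit)` return value is just for illustration.

```python
def lookup(tlb, page_table, vpn):
    """Look up a virtual page number, consulting the TLB first.

    Returns (physical page number, hit) where hit says whether the
    translation was found in the TLB without touching memory.
    """
    if vpn in tlb:
        return tlb[vpn], True   # TLB hit: no extra memory access needed
    ppn = page_table[vpn]       # TLB miss: read the page table in memory
    tlb[vpn] = ppn              # refill the TLB so the next access hits
    return ppn, False
```

The first reference to a page misses and pays the page-table access; repeated references to the same page then hit in the TLB, which is why a small TLB removes most of the extra memory accesses from the previous slide.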

20 VM, TLB, and caches
A 32-bit virtual address is split into bits 31-12 (the virtual page number, looked up in the TLB) and bits 11-0 (the page offset, passed through unchanged); the resulting physical address then indexes the cache.

21 Protection with virtual memory
- VM allows a single main memory to be shared by multiple processes.
- The OS places the page tables in the address space of the OS, so a user process cannot change its page-table mapping.
- Independent virtual pages map to disjoint physical pages.

22 Comparing the 2 levels of hierarchy

  Cache version                                Virtual memory version
  Block or line                                Page
  Miss                                         Page fault
  Block size: 32-64 B                          Page size: 4K-8KB
  Placement: direct mapped,                    Fully associative
    N-way set associative
  Replacement: LRU or random                   LRU
  Write-through or write-back                  Write-back

23 CS61C L36 VM II (23) Garcia © UCB
Review: 4 Qs for any Memory Hierarchy
Q1: Where can a block be placed?
- One place (direct mapped)
- A few places (set associative)
- Any place (fully associative)
Q2: How is a block found?
- Indexing (as in a direct-mapped cache)
- Limited search (as in a set-associative cache)
- Full search (as in a fully associative cache)
- Separate lookup table (as in a page table)
Q3: Which block is replaced on a miss?
- Least recently used (LRU)
- Random
Q4: How are writes handled?
- Write through (level never inconsistent with the lower one)
- Write back (could be "dirty"; must have a dirty bit)

24 Q1: Where can a block be placed in the upper level?
Block 12 placed in an 8-block cache (set-associative mapping: set = block # mod # of sets):
- Fully associative: block 12 can go anywhere (block numbers 0-7).
- Direct mapped: block 12 can go only into block 4 (12 mod 8).
- 2-way set associative: block 12 can go anywhere in set 0 (12 mod 4), with sets 0-3 of two blocks each.
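The "block # mod # of sets" rule above covers all three organizations, since direct mapped is 1-way and fully associative is one set holding every block. A small sketch of that calculation:

```python
def placement_set(block, num_blocks, assoc):
    """Set index where a block may go, given cache size and associativity.

    num_sets = num_blocks / associativity; the block can go into any
    of the `assoc` ways within set (block mod num_sets).
    """
    num_sets = num_blocks // assoc
    return block % num_sets
```

For block 12 in an 8-block cache this reproduces the slide: set 4 when direct mapped (1-way), set 0 when 2-way, and set 0 (the only set) when fully associative (8-way).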

25 Q2: How is a block found in the upper level?
- The block address is divided into tag and index, followed by the block offset: the index selects the set, the tag compare selects the way within the set, and the block offset selects the data.
- So a block is found by direct indexing (using index and block offset), tag compares, or a combination.
- Increasing associativity shrinks the index and expands the tag.
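The tag/index/offset decomposition above is a pair of shifts and masks; the field widths follow from the cache geometry. A minimal sketch (the parameter names are illustrative):

```python
def decompose(addr, offset_bits, index_bits):
    """Split an address into (tag, index, block offset).

    offset_bits = log2(block size); index_bits = log2(number of sets).
    Raising associativity halves the sets, shrinking index_bits by one
    and growing the tag by one bit, as the slide notes.
    """
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset
```

For example, address 214 (0b11010110) with a 4-byte block (2 offset bits) and 8 sets (3 index bits) splits into tag 6, index 5, offset 2.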

26 Q3: Which block is replaced on a miss?
- Easy for direct mapped: there is only one candidate.
- Set associative or fully associative: random, or LRU (least recently used).

Miss rates:
  Associativity:  2-way          4-way          8-way
  Size            LRU    Ran     LRU    Ran     LRU    Ran
  16 KB           5.2%   5.7%    4.7%   5.3%    4.4%   5.0%
  64 KB           1.9%   2.0%    1.5%   1.7%    1.4%   1.5%
  256 KB          1.15%  1.17%   1.13%  1.13%   1.12%  1.12%

27 Q4: What to do on a write hit?
- Write-through: update the word in the cache block and the corresponding word in memory.
- Write-back: update the word in the cache block and allow the memory word to be "stale"
  => add a 'dirty' bit to each line indicating that memory must be updated when the block is replaced
  => the OS flushes the cache before I/O!
Performance trade-offs:
- WT: read misses cannot result in writes.
- WB: no writes of repeated writes to the same block.
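The dirty-bit mechanics of write-back can be sketched as below. This is a toy model under stated assumptions: `Line` is a hypothetical one-word cache line, and `memory` is a `dict` standing in for main memory.

```python
class Line:
    """A one-word cache line with a dirty bit (illustrative)."""
    def __init__(self, data):
        self.data = data
        self.dirty = False

def write_hit(line, value):
    """Write-back policy: update only the cache, marking the line dirty."""
    line.data = value
    line.dirty = True          # memory is now stale for this line

def evict(line, memory, addr):
    """On replacement, write the line back only if it is dirty."""
    if line.dirty:
        memory[addr] = line.data
        line.dirty = False
```

Repeated calls to `write_hit` cost nothing in memory traffic; memory is written once, at eviction, which is exactly the "no writes of repeated writes" advantage above.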

28 Short conclusion of VM
Virtual memory acts as a cache between main memory and disk. VM allows:
- a program to expand its address space;
- sharing of main memory among multiple processes.
To reduce page faults:
- use large pages (exploiting spatial locality);
- use fully associative mapping;
- let the OS use a clever replacement method (LRU, ...).

