
1 inst.eecs.berkeley.edu/~cs61c UCB CS61C : Machine Structures Lecture 34 – Virtual Memory II. 11/19/2014: Primary Data, a new startup, came out of stealth mode. The Los Altos company raised about $63 million in funding to help in its plan to untrap data from traditional storage, much like server virtualization decoupled computing from servers in the data center. It does this through an appliance placed in the data center and software deployed in a customer's IT infrastructure. A powerful metadata server provides a logical abstraction of physical storage, while the policy engine intelligently automates data movement and placement to enable the vision of the software-defined datacenter: simple, cost-effective, scalable performance. http://www.zdnet.com/can-wozniak-reshape-the-data-center-7000036018/

2 CS61C L33 Virtual Memory I (2) Lustig, Fall 2014 © UCB - Another View of the Memory Hierarchy
[Diagram: Regs, Cache, L2 Cache, Memory, Disk, Tape, from the upper (faster) levels to the lower (larger) levels, with units of transfer Instr. Operands, Blocks, Blocks, Pages, and Files between them. Registers through memory have been covered thus far; next: Virtual Memory, the memory-to-disk level.]

3 CS61C L33 Virtual Memory I (3) Lustig, Fall 2014 © UCB - Virtual Memory
- A fully associative cache for the disk
- Provides the program with the illusion of a very large main memory: the working set of "pages" resides in main memory, the others reside on disk
- Enables the OS to share memory and protect programs from each other
- Today, more important for protection than as just another level of the memory hierarchy
- Each process thinks it has all the memory to itself

4 CS61C L33 Virtual Memory I (4) Lustig, Fall 2014 © UCB - Mapping Virtual Memory to Physical Memory
[Diagram: a process's virtual memory (code, static, heap, stack) is mapped onto 64 MB of physical memory starting at address 0, with the remainder residing on disk.]

5 CS61C L33 Virtual Memory I (5) Lustig, Fall 2014 © UCB - Paging/Virtual Memory: Multiple Processes
[Diagram: User A's and User B's virtual memories (each with code, static, heap, and stack, starting at virtual address 0) are mapped through A's and B's page tables into the same 64 MB physical memory.]

6 CS61C L33 Virtual Memory I (6) Lustig, Fall 2014 © UCB - Review: Paging Terminology
- Programs use virtual addresses (VAs)
  - The space of all virtual addresses is called virtual memory (VM)
  - Divided into pages indexed by virtual page number (VPN)
- Main memory is indexed by physical addresses (PAs)
  - The space of all physical addresses is called physical memory (PM)
  - Divided into pages indexed by physical page number (PPN)

7 CS61C L33 Virtual Memory I (7) Lustig, Fall 2014 © UCB - Paging Organization (assume 1 KB pages)
- The page is the unit of mapping: the address translation map works page by page
- The page is also the unit of transfer from disk to physical memory
[Diagram: a virtual memory of 32 1 KB pages (page 0 at address 0, page 1 at 1024, ..., page 31 at 31744) is mapped onto a physical memory of 8 1 KB pages (page 0 at 0, page 1 at 1024, ..., page 7 at 7168).]
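As a minimal illustration (not from the slides) of that split, here is a C sketch that separates a virtual address into its VPN and page offset for 1 KB pages; the sample address 2148 is an arbitrary choice inside virtual page 2:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   1024u                    /* 1 KB pages, as on the slide  */
    #define OFFSET_BITS 10                       /* log2(1024)                   */

    int main(void) {
        uint32_t va     = 2048 + 100;            /* an address in virtual page 2 */
        uint32_t vpn    = va >> OFFSET_BITS;     /* virtual page number          */
        uint32_t offset = va & (PAGE_SIZE - 1);  /* byte offset within the page  */
        printf("VA %u -> VPN %u, offset %u\n", va, vpn, offset);
        return 0;
    }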

8 CS61C L33 Virtual Memory I (8) Lustig, Fall 2014 © UCB - Virtual Memory Mapping Function
- How large is main memory? The disk? We don't know! They are designed to be interchangeable components, so we need a scheme that works regardless of their sizes
- Use a lookup table (the page table) to handle an arbitrary mapping
  - Index the table by VPN, so it has one entry per page of virtual memory (not all entries will be used/valid)
  - The size of physical memory affects the size of each stored translation (the PPN width)
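For example (numbers chosen for illustration, not from the slides): with a 32-bit virtual address space and 1 KB pages, the VPN is 22 bits, so the page table needs 2^22 ≈ 4 million entries, one per virtual page; at 4 bytes per entry that is 16 MiB of page table per process, whether or not every entry is valid.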

9 CS61C L33 Virtual Memory I (9) Lustig, Fall 2014 © UCB - Address Mapping: Page Table
- Page Table functionality:
  - The incoming request is a Virtual Address (VA); we want a Physical Address (PA)
  - Physical offset = virtual offset (pages are aligned), so we just swap the Virtual Page Number (VPN) for the Physical Page Number (PPN)
- Implementation?
  - Use the VPN as an index into the Page Table
  - Each entry stores a PPN and management bits (Valid, Access Rights)
  - The Page Table does NOT store the actual data (the data sits in PM)
[Diagram: VA = Virtual Page # | Page Offset maps to PA = Physical Page # | Page Offset.]

10 CS61C L33 Virtual Memory I (10) Lustig, Fall 2014 © UCB - Page Table Layout
[Diagram: the Virtual Address is split into VPN | offset; each Page Table entry holds V (Valid), AR (Access Rights), and PPN fields.]
1) Index into the Page Table using the VPN
2) Check the Valid and Access Rights bits
3) Concatenate the PPN and the page offset to form the Physical Address
4) Use the PA to access memory
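A minimal C sketch of those four steps, assuming illustrative sizes (1 KB pages, 32 virtual pages) and a simplified entry with only Valid, Access Rights, and PPN fields; this is a paraphrase of the slide's picture, not code from the lecture:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 10                 /* 1 KB pages (illustrative)          */
    #define NUM_VPAGES  32                 /* 32 virtual pages (illustrative)    */

    typedef struct {
        bool     valid;                    /* V: is the page in physical memory? */
        uint8_t  access_rights;            /* AR: e.g., read/write/execute bits  */
        uint32_t ppn;                      /* physical page number               */
    } PageTableEntry;

    static PageTableEntry page_table[NUM_VPAGES];

    /* Translate VA to PA; returns false on a page fault or protection error. */
    bool translate(uint32_t va, uint8_t required_rights, uint32_t *pa) {
        uint32_t vpn    = va >> OFFSET_BITS;          /* 1) index PT by VPN       */
        uint32_t offset = va & ((1u << OFFSET_BITS) - 1);
        PageTableEntry e = page_table[vpn];
        if (!e.valid)                                 /* 2) check Valid bit       */
            return false;                             /*    -> page fault         */
        if ((e.access_rights & required_rights) != required_rights)
            return false;                             /*    -> protection fault   */
        *pa = (e.ppn << OFFSET_BITS) | offset;        /* 3) concatenate PPN|offset*/
        return true;                                  /* 4) caller uses the PA    */
    }

    int main(void) {
        page_table[2] = (PageTableEntry){ .valid = true, .access_rights = 0x4, .ppn = 5 };
        uint32_t pa;
        if (translate(2 * 1024 + 100, 0x4, &pa))      /* a VA in virtual page 2   */
            printf("PA = %u\n", pa);                  /* 5*1024 + 100 = 5220      */
        return 0;
    }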

11 CS61C L33 Virtual Memory I (11) Lustig, Fall 2014 © UCB - Notes on Page Table
- Solves the fragmentation problem: all chunks are the same size, so all holes can be used
- The OS must reserve "swap space" on disk for each process
- To grow, a process asks the Operating System
  - If there are unused pages, the OS uses them first
  - If not, the OS swaps some old pages to disk (using Least Recently Used to pick which pages to swap)
- Each process has its own Page Table
- We will add details, but the Page Table is the essence of Virtual Memory

12 CS61C L33 Virtual Memory I (12) Lustig, Fall 2014 © UCB - Why would a process need to "grow"?
- A program's address space contains 4 regions:
  - stack: local variables, grows downward from ~FFFF FFFF hex
  - heap: space requested via malloc() and reached through pointers; resizes dynamically, grows upward
  - static data: variables declared outside main; does not grow or shrink
  - code: loaded when the program starts, near ~0 hex; does not change
- For now, the OS somehow prevents accesses to the unused region between the stack and the heap
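A small C experiment (not part of the slides) that hints at this layout by printing one address from each region; exact values vary by system, and address-space layout randomization may shuffle them:

    #include <stdio.h>
    #include <stdlib.h>

    int static_var = 42;                          /* static data region */

    int main(void) {
        int  stack_var = 0;                       /* stack region       */
        int *heap_var  = malloc(sizeof *heap_var);/* heap region        */

        printf("code   (main)       : %p\n", (void *)main);
        printf("static (static_var) : %p\n", (void *)&static_var);
        printf("heap   (heap_var)   : %p\n", (void *)heap_var);
        printf("stack  (stack_var)  : %p\n", (void *)&stack_var);

        free(heap_var);
        return 0;
    }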

13 Question: How many bits wide are the VPN and PPN fields, given 16 KiB pages, 40-bit virtual addresses, and 64 GiB of physical memory?

14 Answer: the VPN is 26 bits wide and the PPN is 22 bits wide.
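To check the arithmetic (not spelled out on the slide): 16 KiB pages mean a 14-bit page offset, so VPN = 40 - 14 = 26 bits; 64 GiB of physical memory means 36-bit physical addresses, so PPN = 36 - 14 = 22 bits.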

15 CS61C L33 Virtual Memory I (15) Lustig, Fall 2014 © UCB - Retrieving Data from Memory
[Diagram: two users' virtual addresses (VA1, VA2) are translated through their own page tables (PT User 1, PT User 2) into physical memory.]
1) Access the page table for address translation
2) Access the correct physical address
This requires two accesses of physical memory!

16 CS61C L33 Virtual Memory I (16) Lustig, Fall 2014 © UCB - Translation Look-Aside Buffers (TLBs)
- TLBs are usually small, typically 128 - 256 entries
- Like any other cache, the TLB can be direct mapped, set associative, or fully associative
[Diagram: the processor sends a VA to the TLB lookup; on a hit the PA goes to the cache, on a miss the translation is fetched first; a cache hit returns data, a cache miss goes to main memory.]
Can the cache hold the requested data if the corresponding page is not in physical memory? No!

17 CS61C L33 Virtual Memory I (17) Lustig, Fall 2014 © UCB - TLBs vs. Caches
- TLBs are usually small, typically 16 – 512 entries
- TLB access time is comparable to the cache's (and much less than main memory's)
- TLBs can have associativity; they are usually fully or highly associative
[Diagram: the D$/I$ maps a memory address to the data at that address, going to the next cache level or main memory on a miss; the TLB maps a VPN to a PPN, accessing the Page Table in main memory on a miss.]

18 CS61C L33 Virtual Memory I (18) Lustig, Fall 2014 © UCB - Address Translation Using TLB
[Diagram: the Virtual Address is split into TLB Tag | TLB Index | Page Offset; the TLB (used just like a cache) maps the VPN to a PPN; the Physical Address is PPN | Page Offset, and is then split a second way into the data cache's Tag | Index | Offset.]
The PA is split two different ways! Note: the TIO breakdowns for the VA and the PA are unrelated.
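A C sketch of that VA split and a 2-way set-associative TLB lookup, using the sizes from the question below (14-bit page offset, 8-bit TLB index, so an 18-bit TLB tag) as illustrative assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative sizes: 16 KiB pages, 40-bit VAs,
       and a 2-way set-associative TLB with 512 entries. */
    #define OFFSET_BITS 14
    #define INDEX_BITS   8                       /* 512 entries / 2 ways = 256 sets */
    #define NUM_SETS   (1u << INDEX_BITS)
    #define NUM_WAYS     2

    typedef struct {
        bool     valid;
        uint32_t tag;                            /* upper bits of the VPN           */
        uint32_t ppn;
    } TlbEntry;

    static TlbEntry tlb[NUM_SETS][NUM_WAYS];

    /* Look up a VA in the TLB; on a hit, return the PA through *pa. */
    bool tlb_lookup(uint64_t va, uint64_t *pa) {
        uint64_t offset = va & ((1ull << OFFSET_BITS) - 1);
        uint64_t vpn    = va >> OFFSET_BITS;
        uint32_t index  = vpn & (NUM_SETS - 1);  /* low VPN bits pick the set       */
        uint32_t tag    = vpn >> INDEX_BITS;     /* remaining VPN bits are the tag  */

        for (int way = 0; way < NUM_WAYS; way++) {
            TlbEntry e = tlb[index][way];
            if (e.valid && e.tag == tag) {
                *pa = ((uint64_t)e.ppn << OFFSET_BITS) | offset;
                return true;                     /* TLB hit                         */
            }
        }
        return false;                            /* TLB miss: walk the page table   */
    }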

19 Question: Given 16 KiB pages, 40-bit virtual addresses, 64 GiB of physical memory, and a 2-way set-associative TLB with 512 entries, how many bits wide are the TLB Tag, the TLB Index, and a full TLB Entry (Valid, Dirty, Ref, Access Rights, TLB Tag, PPN)?

20 Answer: the TLB Tag is 18 bits, the TLB Index is 8 bits, and each TLB Entry is 45 bits wide.
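To check the arithmetic (not spelled out on the slide): 512 entries / 2 ways = 256 sets, so the TLB Index is 8 bits; the TLB Tag is the rest of the 26-bit VPN, 26 - 8 = 18 bits; and a TLB Entry holds the 18-bit tag, the 22-bit PPN, and 5 management bits (Valid, Dirty, Ref, and what appears to be a 2-bit Access Rights field), for 45 bits total.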

21 CS61C L33 Virtual Memory I (21) Lustig, Fall 2014 © UCB - Fetching Data on a Memory Read
1) Check the TLB (input: VPN, output: PPN)
   - TLB Hit: fetch the translation, return the PPN
   - TLB Miss: check the page table (in memory)
     - Page Table Hit: load the page table entry into the TLB
     - Page Table Miss (Page Fault): fetch the page from disk to memory, update the corresponding page table entry, then load the entry into the TLB
2) Check the cache (input: PPN, output: data)
   - Cache Hit: return the data value to the processor
   - Cache Miss: fetch the data value from memory, store it in the cache, and return it to the processor
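The same flow as a pseudocode-style C sketch; the helpers (tlb_lookup, page_table_lookup, page_fault_handler, cache_lookup, and so on) are hypothetical stand-ins for the hardware and OS mechanisms, not real APIs, and the 14-bit page offset is an illustrative assumption:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helpers standing in for the mechanisms described above. */
    bool     tlb_lookup(uint64_t vpn, uint64_t *ppn);        /* TLB hit?             */
    bool     page_table_lookup(uint64_t vpn, uint64_t *ppn); /* valid PTE in memory? */
    uint64_t page_fault_handler(uint64_t vpn);               /* bring page from disk */
    void     tlb_fill(uint64_t vpn, uint64_t ppn);
    bool     cache_lookup(uint64_t pa, uint32_t *data);      /* cache hit?           */
    uint32_t memory_read(uint64_t pa);                       /* read from DRAM       */
    void     cache_fill(uint64_t pa, uint32_t data);

    uint32_t load_word(uint64_t vpn, uint64_t page_offset) {
        uint64_t ppn;

        /* 1) Check the TLB (input: VPN, output: PPN). */
        if (!tlb_lookup(vpn, &ppn)) {
            /* TLB miss: check the page table in memory. */
            if (!page_table_lookup(vpn, &ppn)) {
                /* Page fault: fetch the page from disk, update the page table. */
                ppn = page_fault_handler(vpn);
            }
            tlb_fill(vpn, ppn);                    /* load the entry into the TLB */
        }

        /* 2) Check the cache (input: PA, output: data). */
        uint64_t pa = (ppn << 14) | page_offset;   /* 14-bit offset: assumption   */
        uint32_t data;
        if (!cache_lookup(pa, &data)) {
            data = memory_read(pa);                /* cache miss: go to memory    */
            cache_fill(pa, data);
        }
        return data;                               /* return data to the processor */
    }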

22 CS61C L33 Virtual Memory I (22) Lustig, Fall 2014 © UCB - Page Faults
- Load the page off the disk into a free page of memory; switch to some other process while we wait
- An interrupt is thrown when the page has been loaded and the process's page table has been updated; when we switch back to the task, the desired data will be in memory
- If memory is full, replace a page (chosen by LRU), writing it back to disk if necessary, and update both page table entries
- Continuous swapping between disk and memory is called "thrashing"
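A hedged C sketch of the replacement step only: choosing a victim frame with an LRU policy and writing it back if dirty. The frame bookkeeping and the disk_write_page helper are illustrative assumptions, not the lecture's code:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_FRAMES 8              /* illustrative: 8 physical page frames      */

    typedef struct {
        bool     in_use;
        bool     dirty;               /* must be written back before reuse         */
        uint32_t vpn;                 /* which virtual page currently lives here   */
        uint64_t last_used;           /* timestamp maintained for the LRU policy   */
    } Frame;

    static Frame frames[NUM_FRAMES];

    /* Hypothetical helper: write a dirty page back to its swap space on disk. */
    void disk_write_page(uint32_t vpn, const Frame *f);

    /* Pick a frame to receive the faulting page: a free frame if one exists,
       otherwise the least recently used frame (written back first if dirty).  */
    int choose_victim(void) {
        int lru = 0;
        for (int i = 0; i < NUM_FRAMES; i++) {
            if (!frames[i].in_use)
                return i;                                /* free frame: use it first  */
            if (frames[i].last_used < frames[lru].last_used)
                lru = i;                                 /* track least recently used */
        }
        if (frames[lru].dirty)
            disk_write_page(frames[lru].vpn, &frames[lru]);  /* write back if needed  */
        return lru;
    }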

23 CS61C L33 Virtual Memory I (23) Garcia, Spring 2014 © UCB - Virtual Memory Summary
- User program view:
  - Contiguous memory
  - Starts from some set VA
  - "Infinitely" large
  - Is the only running program
- Reality:
  - Non-contiguous memory
  - Starts wherever available memory is
  - Finite size
  - Many programs running simultaneously
- Virtual memory provides:
  - Illusion of contiguous memory
  - All programs starting at the same set address
  - Illusion of ~infinite memory (2^32 or 2^64 bytes)
  - Protection, sharing
- Implementation:
  - Divide memory into chunks (pages)
  - The OS controls the page table that maps virtual addresses into physical addresses
  - Memory acts as a cache for disk
  - The TLB is a cache for the page table

