1 Virtual Memory: Systems Level Andrew Case Slides adapted from Jinyang Li, Randy Bryant and Dave O'Hallaron

2 Topics
Memory system example
Case study: Core i7 memory system
Memory mapping in Linux

3 Memory System Example
Addressing:
- 14-bit virtual addresses
- 12-bit physical addresses
- Page size = 64 bytes
A virtual address splits into a virtual page number (VPN) and a virtual page offset (VPO); a physical address splits into a physical page number (PPN) and a physical page offset (PPO).
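Working out the field widths for this example: 64-byte pages mean p = 6 offset bits, so VPO = PPO = 6 bits; the 14-bit virtual address leaves 14 - 6 = 8 bits for the VPN, and the 12-bit physical address leaves 12 - 6 = 6 bits for the PPN.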

4 Memory System: Page Table
First 16 entries (out of 256) shown; each entry holds a VPN, a PPN, and a Valid bit. [Page-table figure: among the entries visible in the transcript, VPN 0x0F maps to PPN 0x0D, VPN 0x0E to 0x11, VPN 0x0D to 0x2D, and VPN 0x0A to 0x09, while VPNs 0x07, 0x0B, and 0x0C have Valid = 0.]

5 Simple Memory System: TLB
16 entries, 4-way set associative. The low bits of the VPN form the TLB index (TLBI, selecting one of the 4 sets) and the remaining bits form the TLB tag (TLBT); each entry holds a Tag, a PPN, and a Valid bit. [Figure: TLB contents, 4 sets x 4 entries.] The TLB works the same way as the hardware caches covered earlier, because it is built from similar caching hardware.

6 Simple Memory System: Cache
16 lines, 4-byte block size; physically addressed; direct mapped. The physical address splits into a cache tag (CT), a cache index (CI), and a cache offset (CO). [Figure: cache contents, 16 lines, each with Valid, Tag, and bytes B0-B3.]

7 Address Translation Example #1
Virtual address: 0x03D4
VPN = 0x0F, TLBI = 0x3, TLBT = 0x03; TLB hit? Yes; page fault? No; PPN = 0x0D
Physical address (based on the TLB and cache states shown on the previous slides): 0x354
CO = 0x0, CI = 0x5, CT = 0x0D; cache hit? Yes; byte returned: 0x36

8 Address Translation Example #2
Virtual address: 0x0B8F
VPN = 0x2E, TLBI = 0x2, TLBT = 0x0B; TLB hit? No; page fault? Yes; PPN: TBD
Physical address: cannot be formed until the fault is handled, so CO, CI, CT, cache hit, and byte are all TBD.
Results depend on the TLB and cache states shown on the previous slides.

9 Address Translation Example #3
Virtual address: 0x0020
VPN = 0x00, TLBI = 0x0, TLBT = 0x00; TLB hit? No; page fault? No; PPN = 0x28
Physical address: 0xA20
CO = 0x0, CI = 0x8, CT = 0x28; cache hit? No; byte: fetched from memory
Results depend on the TLB and cache states shown on the previous slides.
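The field extractions in these three examples can be checked mechanically. The following C sketch (added here for illustration; not part of the original slides) hard-codes the simple memory system's parameters (6-bit page offset, 2-bit TLB index, 4-bit cache index, 2-bit block offset) and reproduces the Example #1 numbers:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t va = 0x03D4;                 /* virtual address from Example #1 */

    unsigned vpo  = va & 0x3F;            /* low 6 bits: page offset (64-byte pages)   */
    unsigned vpn  = va >> 6;              /* remaining 8 bits: virtual page number     */
    unsigned tlbi = vpn & 0x3;            /* low 2 bits of VPN: TLB set index (4 sets) */
    unsigned tlbt = vpn >> 2;             /* high 6 bits of VPN: TLB tag               */

    /* Suppose the page table maps this VPN to PPN 0x0D, as in Example #1. */
    unsigned ppn = 0x0D;
    uint16_t pa  = (ppn << 6) | vpo;      /* 12-bit physical address */

    unsigned co = pa & 0x3;               /* low 2 bits: offset within 4-byte block */
    unsigned ci = (pa >> 2) & 0xF;        /* next 4 bits: cache index (16 lines)    */
    unsigned ct = pa >> 6;                /* high 6 bits: cache tag                 */

    printf("VPN=0x%02X VPO=0x%02X TLBI=0x%X TLBT=0x%02X\n", vpn, vpo, tlbi, tlbt);
    printf("PA=0x%03X  CT=0x%02X CI=0x%X CO=0x%X\n", pa, ct, ci, co);
    return 0;
}

Changing va to 0x0020 and ppn to 0x28 reproduces the field values of Example #3.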

10 Topics
Simple memory system example
Case study: Core i7/Linux memory system
Memory mapping

11 Intel Core i7 Memory System
Each core in the processor package contains:
- Registers, instruction fetch, and an MMU for address translation
- L1 i-cache and L1 d-cache: 32 KB, 8-way each
- L1 i-TLB: 128 entries, 4-way; L1 d-TLB: 64 entries, 4-way
- L2 unified cache: 256 KB, 8-way; L2 unified TLB: 512 entries, 4-way
Shared by all cores:
- L3 unified cache: 8 MB, 16-way
- DDR3 memory controller (3 channels, 32 GB/s total) to main memory
- QuickPath interconnect to the other cores and the I/O bridge

12 Review of Symbols
Basic parameters:
- N = 2^n: number of addresses in the virtual address space
- M = 2^m: number of addresses in the physical address space
- P = 2^p: page size (bytes)
Components of the virtual address (VA):
- TLB: translation lookaside buffer
- TLBI: TLB index
- TLBT: TLB tag
- VPO: virtual page offset
- VPN: virtual page number
Components of the physical address (PA):
- PPO: physical page offset (same as VPO)
- PPN: physical page number
- CO: byte offset within the cache line
- CI: cache index
- CT: cache tag
Also: PDBR: page directory base register (holds the physical address of the top-level page table)

13 Core i7 Level 1-3 Page Table Entries
Each entry references a 4 KB child page table. Fields:
- P: child page table present in physical memory or not
- R/W: read-only or read/write access permission
- U/S: user or supervisor (kernel) mode access
- WT: write-through or write-back cache policy
- CD: caching disabled or enabled
- A: reference bit (set by the MMU on reads and writes, cleared by software)
- PS: page size, 4 KB or 4 MB (defined for Level 1 PTEs only)
- G: global page (don't evict from the TLB on a task switch)
- Page table physical base address: the 40 most significant bits of the child page table's physical address (forces page tables to be 4 KB aligned)
- Unused bits

14 Core i7 Level 4 Page Table Entries
Each entry references a 4 KB page. Fields:
- P: page present in memory or not
- R/W: read-only or read/write access permission
- U/S: user or supervisor (kernel) mode access
- WT: write-through or write-back cache policy
- CD: caching disabled or enabled
- A: reference bit (set by the MMU on reads and writes, cleared by software)
- D: dirty bit (set by the MMU on writes, cleared by software)
- G: global page (don't evict from the TLB on a task switch)
- Page physical base address (PPN): the 40 most significant bits of the page's physical address (forces pages to be 4 KB aligned)
- Unused bits
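As an illustration (not from the slides), the flag bits of a level-4 PTE can be decoded with masks at the standard x86-64 bit positions (P = bit 0, R/W = bit 1, U/S = bit 2, WT = bit 3, CD = bit 4, A = bit 5, D = bit 6, G = bit 8, PPN in bits 12-51); the example PTE value below is made up:

#include <stdio.h>
#include <stdint.h>

#define PTE_P    (1ULL << 0)   /* present                          */
#define PTE_RW   (1ULL << 1)   /* 1 = read/write, 0 = read-only    */
#define PTE_US   (1ULL << 2)   /* 1 = user, 0 = supervisor only    */
#define PTE_WT   (1ULL << 3)   /* write-through                    */
#define PTE_CD   (1ULL << 4)   /* caching disabled                 */
#define PTE_A    (1ULL << 5)   /* accessed (reference bit)         */
#define PTE_D    (1ULL << 6)   /* dirty bit                        */
#define PTE_G    (1ULL << 8)   /* global page                      */

/* Bits 51..12 hold the 40-bit physical page number. */
static uint64_t pte_ppn(uint64_t pte) {
    return (pte >> 12) & ((1ULL << 40) - 1);
}

int main(void) {
    uint64_t pte = 0x000000012345A067ULL;  /* made-up example value */
    printf("present=%d rw=%d user=%d dirty=%d ppn=0x%llx\n",
           !!(pte & PTE_P), !!(pte & PTE_RW), !!(pte & PTE_US),
           !!(pte & PTE_D), (unsigned long long)pte_ppn(pte));
    return 0;
}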

15 Core i7 Page Table Translation
The 48-bit virtual address is split into four 9-bit VPN fields (VPN1-VPN4) plus a 12-bit VPO. PDBR (stored in the CR3 register) holds the physical address of the Level 1 page table (the page global directory). VPN1 indexes the Level 1 table to locate the Level 2 table (page upper directory), VPN2 indexes Level 2 to locate Level 3 (page middle directory), VPN3 indexes Level 3 to locate the Level 4 page table, and VPN4 selects the L4 PTE, whose 40-bit PPN is concatenated with the 12-bit VPO (= PPO) to form the physical address.
Coverage per entry: each Level 1 entry maps a 512 GB region, each Level 2 entry a 1 GB region, each Level 3 entry a 2 MB region, and each Level 4 entry a 4 KB page.
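The indexing arithmetic can be made concrete with a toy software model of the walk (the real walk is done by the MMU in hardware, and the "physical memory" below is just an array invented for the example): each level consumes one 9-bit VPN field, and the final PPN is concatenated with the 12-bit VPO.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PTE_P 1ULL
static uint64_t mem[4][512];                /* four fake page-table pages */

static uint64_t walk(uint64_t cr3, uint64_t va) {
    uint64_t table = cr3;                   /* simulated id of the L1 table */
    for (int level = 1; level <= 4; level++) {
        int shift = 12 + 9 * (4 - level);   /* 39, 30, 21, 12               */
        int idx = (va >> shift) & 0x1FF;    /* 9-bit index for this level   */
        uint64_t pte = mem[table][idx];
        if (!(pte & PTE_P)) return (uint64_t)-1;   /* would be a page fault */
        table = pte >> 12;                  /* next table, or the final PPN */
    }
    return (table << 12) | (va & 0xFFF);    /* PPN concatenated with VPO    */
}

int main(void) {
    uint64_t va = 0x00007F1234567ABCULL;
    /* Build a chain of PTEs: L1 -> L2 -> L3 -> L4 -> PPN 0x5 */
    memset(mem, 0, sizeof mem);
    mem[0][(va >> 39) & 0x1FF] = (1ULL << 12) | PTE_P;
    mem[1][(va >> 30) & 0x1FF] = (2ULL << 12) | PTE_P;
    mem[2][(va >> 21) & 0x1FF] = (3ULL << 12) | PTE_P;
    mem[3][(va >> 12) & 0x1FF] = (5ULL << 12) | PTE_P;
    printf("PA = 0x%llx\n", (unsigned long long)walk(0, va));   /* 0x5abc */
    return 0;
}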

16 End-to-end Core i7 Address Translation
A memory request presents a virtual address (VA) split into a 36-bit VPN and a 12-bit VPO. The VPN is looked up in the L1 TLB; on a TLB miss, the MMU walks the four-level page tables (rooted at PDBR, stored in the CR3 register, each level indexed by a 9-bit VPN field) to fetch the PTE. Either way, the resulting 40-bit PPN is combined with the 12-bit PPO to form the physical address (PA), which is presented to the L1 cache; on an L1 miss the request goes to L2, L3, and main memory, and a 32- or 64-bit result is returned.

17 End-to-end Core i7 Address Translation
The same flow with the TLB and cache fields shown: the 36-bit VPN splits into a 32-bit TLBT and a 4-bit TLBI for the L1 TLB (16 sets, 4 entries per set). The physical address splits into a 40-bit CT, a 6-bit CI, and a 6-bit CO for the L1 d-cache (64 sets, 8 lines per set). On a TLB miss, the PTE is fetched through the page tables rooted at PDBR (CR3) as before.
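A note on where these widths come from: the 32 KB, 8-way L1 d-cache with 64-byte lines has 32768 / (8 x 64) = 64 sets, so CO = 6 bits and CI = 6 bits, leaving CT = 52 - 6 - 6 = 40 bits of the 52-bit physical address (40-bit PPN + 12-bit PPO); the 16-set L1 TLB needs TLBI = 4 bits, leaving TLBT = 36 - 4 = 32 bits. Because CI + CO = 12 bits = the page offset, the cache set and block offset can be taken from the VPO while the TLB lookup proceeds in parallel.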

18 Virtual Memory of a Linux Process
From high to low addresses:
- Kernel virtual memory: kernel code and data (identical for each process), plus process-specific data structures such as page tables, the task and mm structs, and the kernel stack (different for each process)
- User stack (top of stack marked by %esp)
- Memory-mapped region for shared libraries
- Runtime heap (created by malloc)
- Uninitialized data (.bss)
- Initialized data (.data)
- Program text (.text), starting at 0x08048000 on 32-bit Linux or 0x00400000 on 64-bit Linux

19 Linux Organizes VM as a Collection of "Areas"
Each process's task_struct has an mm field pointing to an mm_struct, which contains:
- pgd: the address of the page global directory (points to the L1 page table)
- mmap: a pointer to a list of vm_area_struct records, one per area (text, data, shared libraries, ...)
Each vm_area_struct describes one area of the process's virtual memory:
- vm_start, vm_end: where the area begins and ends
- vm_prot: read/write permissions for this area
- vm_flags: whether the area's pages are shared with other processes or private to this process
- vm_next: pointer to the next area in the list
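A simplified sketch of these structures (field names follow the slide; the real Linux definitions in the kernel sources have many more fields):

struct vm_area_struct {
    unsigned long vm_start;            /* lowest address of the area              */
    unsigned long vm_end;              /* first address beyond the area           */
    unsigned long vm_prot;             /* read/write permissions for the area     */
    unsigned long vm_flags;            /* shared with other processes or private  */
    struct vm_area_struct *vm_next;    /* next area in the list                   */
};

struct mm_struct {
    struct vm_area_struct *mmap;       /* list of areas in this address space     */
    void *pgd;                         /* address of the page global directory    */
};

struct task_struct {
    struct mm_struct *mm;              /* this process's address-space state      */
};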

20 Linux Page Fault Handling
On a fault at some virtual address, the kernel consults the process's area list and distinguishes three cases (numbered 1-3 in the figure):
1. Segmentation fault: the address lies in no area (accessing a non-existent page)
2. Protection exception: the address lies in an area but the access violates its permissions, e.g., writing to a read-only page (Linux reports this as a segmentation fault as well)
3. Normal page fault: the address is legal but the page is not resident, so the kernel pages it in and restarts the instruction
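A toy, self-contained model of this three-way decision (illustrative only; not the actual kernel handler):

#include <stdio.h>

struct area { unsigned long start, end; int writable; };

/* Classify a fault on address `addr`, following the slide's three cases. */
static const char *classify_fault(const struct area *areas, int n,
                                  unsigned long addr, int is_write) {
    for (int i = 0; i < n; i++) {
        if (addr >= areas[i].start && addr < areas[i].end) {
            if (is_write && !areas[i].writable)
                return "protection exception (reported as segmentation fault)";
            return "normal page fault: page the data in and restart";
        }
    }
    return "segmentation fault: address is in no area";
}

int main(void) {
    struct area areas[] = {
        { 0x400000, 0x401000, 0 },   /* read-only text area */
        { 0x600000, 0x602000, 1 },   /* writable data area  */
    };
    printf("%s\n", classify_fault(areas, 2, 0x400010, 1));  /* write to text    */
    printf("%s\n", classify_fault(areas, 2, 0x601000, 0));  /* read from data   */
    printf("%s\n", classify_fault(areas, 2, 0x900000, 0));  /* unmapped address */
    return 0;
}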

21 Memory Mapping
VM areas are initialized by associating them with objects on disk; this process is known as memory mapping. An area can be backed by (i.e., get its initial values from):
- A regular file on disk (e.g., an executable object file): the initial page bytes come from a section of the file
- Nothing: the first fault allocates a physical page full of zeros (a demand-zero page)
If a dirty page is later evicted, the OS copies it to a special swap area on disk.
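A small example of an area backed by nothing: on Linux, mmap with MAP_ANONYMOUS creates demand-zero pages, so the first read returns 0 and the first write faults a zero-filled page in (illustrative sketch, not from the slides):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1 << 20;   /* 1 MB demand-zero area */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }
    printf("first byte before any write: %d\n", p[0]);  /* demand-zero: prints 0  */
    p[0] = 42;                                           /* first touch faults a page in */
    munmap(p, len);
    return 0;
}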

22 Demand Paging
Key point: the OS delays copying virtual pages into physical memory until they are referenced. This is crucial for time and space efficiency.

23 Shared Objects Under Demand Paging
Process 1 maps the shared object. [Figure: the shared object on disk, physical memory, and the virtual memories of process 1 and process 2.]

24 Shared Objects Under Demand Paging
Process 2 maps the same shared object. Note: the same object can be mapped at different virtual addresses in different processes. [Figure: both processes' mappings refer to the same shared object.]

25 Sharing: Copy-on-write (COW) Objects
Two processes map a private copy-on-write (COW) object. Each area is flagged as private copy-on-write, and the PTEs in the private areas are flagged as read-only. [Figure: both processes' private COW areas map the same physical pages of the object.]

26 Sharing: Copy-on-write (COW) Objects
When an instruction writes to a private copy-on-write page, it triggers a protection fault. The handler creates a new read/write copy of the page, and the instruction restarts when the handler returns. Copying is deferred as long as possible!

27 The fork Function
fork clones the current process:
- It creates a virtual address space for the new process that is an exact copy of the parent's memory mappings
- Each memory area in both processes is flagged as COW, and each page in both processes is marked read-only
- Each process then proceeds independently; subsequent writes create new private pages via the COW mechanism
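A short demonstration of the COW behavior after fork (illustrative; output ordering depends on scheduling): the child's write gets a private copy of the page holding x, so the parent still sees the original value.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int x = 1;   /* lives in the .data area, shared copy-on-write after fork */

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {                      /* child */
        x = 99;                          /* write faults; kernel copies the page */
        printf("child:  x = %d\n", x);   /* 99 */
        exit(0);
    }
    waitpid(pid, NULL, 0);               /* parent */
    printf("parent: x = %d\n", x);       /* still 1 */
    return 0;
}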

28 The exec Function
To load and run a new program in the current process, exec:
- Frees the old mapped areas and page tables
- Creates new mapped areas and the corresponding page table entries: the user stack, runtime heap (via malloc), and uninitialized data (.bss) are private demand-zero areas; the program text (.text) and initialized data (.data) of a.out are private, file-backed; the memory-mapped region for shared libraries (e.g., the .text and .data of libc.so) is shared, file-backed
- Sets the PC to the entry point in .text
- Subsequently, the OS faults in code and data pages as they are needed
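A minimal fork-then-exec sketch (illustrative; error handling kept minimal): execve discards the child's old areas, builds the mapping described above for the new program, and sets the PC to its entry point.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *argv_new[] = { "/bin/ls", "-l", NULL };
    char *envp_new[] = { NULL };
    pid_t pid = fork();
    if (pid == 0) {                        /* child: replace its address space */
        execve("/bin/ls", argv_new, envp_new);
        perror("execve");                  /* only reached if execve fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                 /* parent waits for the new program */
    return 0;
}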

29 User-Level Memory Mapping
void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset)
Good for:
- Process-to-process memory sharing (shared read-only data, or IPC through a shared writable region; see the sketch after this slide)
- Reading/writing large files non-sequentially (when you need to move around in the file)
Maps len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address addr:
- addr: may be NULL, meaning "pick an address for me"
- prot: PROT_READ, PROT_WRITE, ...
- flags: MAP_ANON, MAP_PRIVATE, MAP_SHARED, ...
Returns a pointer to the start of the mapped area, which may differ from addr if the kernel did not honor the hint.
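A hedged sketch of the process-to-process sharing use case: a MAP_SHARED anonymous mapping created before fork is visible to both parent and child, so the parent observes the child's write (MAP_ANONYMOUS is the Linux spelling; some systems use MAP_ANON).

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); exit(1); }
    *shared = 0;

    if (fork() == 0) {          /* child writes through the shared mapping */
        *shared = 1234;
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", *shared);   /* 1234: both map the same physical page */
    munmap(shared, sizeof(int));
    return 0;
}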

30 User-Level Memory Mapping
void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset)
[Figure: len bytes beginning at offset offset in the disk file referred to by descriptor fd are mapped into the process's virtual memory starting at start, or at an address chosen by the kernel.]

31 Using mmap to Copy Files

#include "csapp.h"

/*
 * mmapcopy - uses mmap to copy file fd to stdout
 */
void mmapcopy(int fd, int size)
{
    char *bufp;   /* ptr to memory-mapped VM area */

    bufp = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    write(1, bufp, size);
    return;
}

/* mmapcopy driver */
int main(int argc, char **argv)
{
    struct stat stat;
    int fd;

    /* Check for required command-line argument */
    if (argc != 2) {
        printf("usage: %s <filename>\n", argv[0]);
        exit(0);
    }

    /* Copy the input file to stdout */
    fd = open(argv[1], O_RDONLY, 0);
    fstat(fd, &stat);
    mmapcopy(fd, stat.st_size);
    exit(0);
}

No extra copy of the file data is created in user space (the kernel's page cache is used directly), and no read system calls are needed to process the data.