Virtual Memory (Review)


Virtual Memory (Review)
Programs refer to virtual memory addresses, e.g. movl (%ecx),%eax
- Conceptually, memory is a very large array of bytes, from address 00...0 up to FF...F, and each byte has its own address
- Actually implemented with a hierarchy of different memory types
- The system provides an address space private to each particular "process"
- Allocation: the compiler and run-time system decide where in the single virtual address space each program object is stored
But why virtual and not physical memory?
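A quick way to see virtual addresses from user code is to print pointer values; a minimal C sketch (the addresses printed are virtual, chosen by the system, and will differ between runs and machines):

    #include <stdio.h>

    int global;                          /* lives in the data segment */

    int main(void) {
        int local;                       /* lives on the stack */
        /* %p prints the (virtual) address each object lives at */
        printf("code:   %p\n", (void *) main);
        printf("global: %p\n", (void *) &global);
        printf("stack:  %p\n", (void *) &local);
        return 0;
    }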

Problem 1: How Does Everything Fit?
- 64-bit addresses can name 16 exabytes
- Physical main memory: a few gigabytes
- And there are many processes ...

Problem 2: Memory Management
- Each of many processes (1 ... n) needs a stack, heap, .text, and .data somewhere in physical main memory
- What goes where?

Problem 3: How To Protect
- Process i and process j live in the same physical main memory; how do we keep one from touching the other's data?

Problem 4: How To Share?
- Conversely, how can process i and process j deliberately share data in physical main memory?

Solution: Level Of Indirection
- A virtual memory mapping sits between each process (1 ... n) and physical memory
- Each process gets its own private virtual memory space
- Solves all of the previous problems

Address Spaces
- Linear address space: ordered set of contiguous non-negative integer addresses {0, 1, 2, 3, ...}
- Virtual address space: set of N = 2^n virtual addresses {0, 1, 2, 3, ..., N-1}
- Physical address space: set of M = 2^m physical addresses {0, 1, 2, 3, ..., M-1}
- Clean distinction between data (bytes) and their attributes (addresses): each object can now have multiple addresses
- Every byte in main memory has one physical address and one (or more) virtual addresses

A System Using Physical Addressing
[Figure: the CPU sends a physical address (PA) directly to main memory (bytes 0 ... M-1) and receives a data word]
- Used in "simple" systems with embedded microcontrollers
- In devices such as cars, elevators, digital picture frames, ...

A System Using Virtual Addressing
[Figure: the CPU emits a virtual address (VA); the MMU on the CPU chip translates it to a physical address (PA) before main memory (bytes 0 ... M-1) returns a data word]
- Used in all modern desktops, laptops, workstations
- One of the great ideas in computer science
- The MMU's translation happens before the cache is checked

Why Virtual Addressing?
- Simplifies memory management for programmers: each process gets an identical, full, private, linear address space
- Isolates address spaces: one process can't interfere with another's memory, because they operate in different address spaces
- A user process cannot access privileged information: different sections of the address space have different permissions

Why Virtual Memory?
Efficient use of limited main memory (RAM):
- Use RAM as a cache for the parts of a virtual address space
- Some non-cached parts are stored on disk
- Some (unallocated) non-cached parts are stored nowhere
- Keep only active areas of the virtual address space in memory
- Transfer data back and forth as needed

VM as a Tool for Caching
- Virtual memory: an array of N = 2^n contiguous bytes
- Think of the array (the allocated part) as being stored on disk
- Physical main memory (DRAM) acts as a cache for allocated virtual memory
- Blocks are called pages; page size = 2^p bytes
[Figure: virtual pages (VPs) live on disk in states unallocated / cached / uncached; cached pages occupy physical pages (PPs) in DRAM]

Memory Hierarchy: Core 2 Duo (not drawn to scale; L1/L2 caches use 64 B blocks)

  Level             Size      Throughput      Latency
  L1 I-/D-cache     32 KB     16 B/cycle      3 cycles
  L2 unified cache  ~4 MB     8 B/cycle       14 cycles
  Main memory       ~4 GB     2 B/cycle       100 cycles
  Disk              ~500 GB   1 B/30 cycles   millions of cycles

- Miss penalty (latency) to main memory: ~30x; to disk: ~10,000x

DRAM Cache Organization
DRAM cache organization is driven by the enormous miss penalty:
- DRAM is about 10x slower than SRAM
- Disk is about 10,000x slower than DRAM (for the first byte; successive bytes come faster)
Consequences:
- Large page (block) size: typically 4-8 KB, sometimes 4 MB
- Fully associative: any VP can be placed in any PP; requires a "large" mapping function, different from CPU caches
- Highly sophisticated, expensive replacement algorithms: too complicated and open-ended to be implemented in hardware
- Write-back rather than write-through

Address Translation: Page Tables
- A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages (here: 8 VPs)
- It is a per-process kernel data structure in DRAM
[Figure: a memory-resident page table with 8 PTEs; valid entries point to physical pages cached in DRAM (VP 1, VP 2, VP 7, VP 4), invalid entries hold null or the disk address of an uncached virtual page]

Address Translation With a Page Table
- The virtual address is split into a virtual page number (VPN) and a virtual page offset (VPO)
- The page table base register (PTBR) holds the page table address for the current process
- The VPN indexes the page table; the selected PTE holds a valid bit and a physical page number (PPN)
- Valid bit = 0: page not in memory (page fault)
- The physical address is the PPN concatenated with the physical page offset (PPO = VPO)
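The bit-level mechanics are simple; a minimal C sketch of the lookup, assuming a single-level table, 4 KB pages, and a hypothetical pte_t with valid and ppn fields:

    #include <stdint.h>
    #include <stdbool.h>

    #define P 12                            /* log2(page size): 4 KB assumed */

    typedef struct {
        bool     valid;                     /* is the page resident in DRAM? */
        uint64_t ppn;                       /* physical page number */
    } pte_t;

    /* Translate va using page_table; returns false on a page fault. */
    bool translate(const pte_t *page_table, uint64_t va, uint64_t *pa) {
        uint64_t vpn = va >> P;                  /* upper bits select the PTE */
        uint64_t vpo = va & ((1ULL << P) - 1);   /* low bits pass through */
        if (!page_table[vpn].valid)
            return false;                        /* valid bit = 0: page fault */
        *pa = (page_table[vpn].ppn << P) | vpo;  /* PPN concatenated with PPO */
        return true;
    }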

Page Hit
- Page hit: reference to a VM word that is in physical memory (DRAM)
[Figure: the virtual address selects a valid PTE, which points to a page cached in DRAM]

Page Miss
- Page miss: reference to a VM word that is not in physical memory
[Figure: the virtual address selects an invalid PTE, which points to an uncached page on disk]

Handling Page Fault
- Page miss causes a page fault (an exception)
[Figure: the faulting virtual address selects an invalid PTE]

Handling Page Fault
- Page miss causes a page fault (an exception)
- The page fault handler selects a victim to be evicted (here VP 4)

Handling Page Fault
- Page miss causes a page fault (an exception)
- The page fault handler selects a victim to be evicted (here VP 4)
[Figure: the missing page (VP 3) has been paged in from disk, replacing VP 4 in PP 3, and its PTE updated]

Handling Page Fault
- Page miss causes a page fault (an exception)
- The page fault handler selects a victim to be evicted (here VP 4)
- The offending instruction is restarted: page hit!

Why does it work? Locality
- Virtual memory works because of locality
- At any point in time, programs tend to access a set of active virtual pages called the working set
- Programs with better temporal locality have smaller working sets
- If (working set size < main memory size): good performance for one process after compulsory misses
- If (SUM of working set sizes > main memory size): thrashing, a performance meltdown where pages are swapped (copied) in and out continuously

VM as a Tool for Memory Management
- Key idea: each process has its own virtual address space
- It can view memory as a simple linear array
- The mapping function scatters addresses through physical memory
- Well-chosen mappings simplify memory allocation and management
[Figure: virtual pages from two processes' address spaces (VP 1, VP 2, ..., N-1 each) map, via address translation, to scattered physical pages (e.g. PP 2, PP 6, PP 8) in DRAM]

VM as a Tool for Memory Management
- Memory allocation: each virtual page can be mapped to any physical page, and a virtual page can be stored in different physical pages at different times
- Sharing code and data among processes: map virtual pages in both processes to the same physical page (here: PP 6, e.g. read-only library code)

Simplifying Linking and Loading
- Linking: each program has a similar virtual address space; code, stack, and shared libraries always start at the same addresses
- Loading: execve() allocates virtual pages for the .text and .data sections and creates PTEs marked as invalid; the .text and .data sections are then copied, page by page, on demand by the virtual memory system
[Figure: classic 32-bit Linux layout: kernel virtual memory above 0xc0000000 (invisible to user code); user stack created at runtime, growing down from %esp; memory-mapped region for shared libraries at 0x40000000; run-time heap (created by malloc) growing up to brk; read/write segment (.data, .bss) and read-only segment (.init, .text, .rodata) loaded from the executable file, starting at 0x08048000; unused region below]

VM as a Tool for Memory Protection
- Extend PTEs with permission bits
- The page fault handler checks these before remapping; if violated, the kernel sends the process SIGSEGV (segmentation fault)

  Process i:  VP 0: SUP=No  READ=Yes WRITE=No  -> PP 6
              VP 1: SUP=No  READ=Yes WRITE=Yes -> PP 4
              VP 2: SUP=Yes READ=Yes WRITE=Yes -> PP 2
  Process j:  VP 0: SUP=No  READ=Yes WRITE=No  -> PP 9
              VP 1: SUP=Yes READ=Yes WRITE=Yes -> PP 6
              VP 2: SUP=No  READ=Yes WRITE=Yes -> PP 11

Address Translation: Page Hit
1) Processor sends the virtual address (VA) to the MMU
2-3) MMU fetches the PTE (via its address, the PTEA) from the page table in memory
4) MMU sends the physical address (PA) to cache/memory
5) Cache/memory sends the data word to the processor

Address Translation: Page Fault
1) Processor sends the virtual address to the MMU
2-3) MMU fetches the PTE from the page table in memory
4) Valid bit is zero, so the MMU triggers a page fault exception
5) Handler identifies a victim page (and, if dirty, pages it out to disk)
6) Handler pages in the new page and updates the PTE in memory
7) Handler returns to the original process, restarting the faulting instruction

Speeding up Translation with a TLB
- Page table entries (PTEs) are cached in L1/L2 like any other memory word
  - PTEs may be evicted by other data references
  - A PTE hit still requires a delay
- Solution: Translation Lookaside Buffer (TLB)
  - Small, fast hardware cache in the MMU
  - Maps virtual page numbers to physical page numbers
  - Contains complete page table entries for a small number of pages
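A set-associative TLB is just a small cache keyed by VPN bits; a minimal C sketch, with hypothetical entry layout and set/way counts chosen for illustration:

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_SETS 16                    /* assumed; a power of two */
    #define TLB_WAYS 4                     /* 4-way set associative */

    typedef struct {
        bool     valid;
        uint64_t tag;                      /* TLBT: VPN bits above the index */
        uint64_t ppn;                      /* cached translation */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

    /* Look up vpn; on a hit, store the PPN and return true. */
    bool tlb_lookup(uint64_t vpn, uint64_t *ppn) {
        uint64_t index = vpn % TLB_SETS;   /* TLBI: low VPN bits pick the set */
        uint64_t tag   = vpn / TLB_SETS;   /* TLBT: remaining VPN bits */
        for (int way = 0; way < TLB_WAYS; way++) {
            tlb_entry_t *e = &tlb[index][way];
            if (e->valid && e->tag == tag) {
                *ppn = e->ppn;             /* hit: no page-table access needed */
                return true;
            }
        }
        return false;                      /* miss: walk the page table */
    }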

TLB Hit
A TLB hit eliminates a memory access:
1) Processor sends the VA to the MMU
2) MMU sends the VPN to the TLB
3) TLB returns the PTE
4) MMU sends the PA to cache/memory
5) Cache/memory sends the data word to the processor

TLB Miss
1) Processor sends the VA to the MMU
2) MMU sends the VPN to the TLB: no matching entry
3-4) MMU fetches the PTE from the page table in memory (via the PTEA) and installs it in the TLB
5) MMU sends the PA to cache/memory
6) Cache/memory sends the data word to the processor
- A TLB miss incurs an additional memory access (the PTE)
- Fortunately, TLB misses are rare (why? locality: most references fall on recently used pages)

From virtual address to memory location
- The Translation Lookaside Buffer (TLB) is a special fast cache just for the page table; it can be fully associative
[Figure: the CPU's virtual address goes to the TLB; a hit yields the physical address for the cache, a miss goes to the page table in main memory]

Translation Lookaside Buffer
Virtual-to-physical translations are cached in a TLB.

What Happens on a Context Switch?
- The page table is per-process, and so are the TLB's contents
- Option 1: TLB flush on every context switch
- Option 2: TLB tagging, so entries from different address spaces can coexist

Review of Abbreviations
Components of the virtual address (VA):
- TLBI: TLB index
- TLBT: TLB tag
- VPO: virtual page offset
- VPN: virtual page number
Components of the physical address (PA):
- PPO: physical page offset (same as VPO)
- PPN: physical page number
- CO: byte offset within cache line
- CI: cache index
- CT: cache tag

Simple Memory System Example
Addressing:
- 14-bit virtual addresses
- 12-bit physical addresses
- Page size = 64 bytes (so VPO and PPO are the low 6 bits)
[Field diagrams: VA bits 13-6 = VPN (virtual page number), bits 5-0 = VPO (virtual page offset); PA bits 11-6 = PPN (physical page number), bits 5-0 = PPO (physical page offset)]

Simple Memory System Page Table
Only the first 16 entries (out of 256) are shown:

  VPN  PPN  Valid    VPN  PPN  Valid
  00   28   1        08   13   1
  01   –    0        09   17   1
  02   33   1        0A   09   1
  03   02   1        0B   –    0
  04   –    0        0C   –    0
  05   16   1        0D   2D   1
  06   –    0        0E   11   1
  07   –    0        0F   0D   1

Simple Memory System TLB
- 16 entries, 4-way set associative (so 4 sets)
- TLBI = low 2 bits of the VPN (VA bits 7-6); TLBT = remaining VPN bits (VA bits 13-8)

  Set   Tag PPN Valid   Tag PPN Valid   Tag PPN Valid   Tag PPN Valid
  0     03  –   0       09  0D  1       00  –   0       07  02  1
  1     03  2D  1       02  –   0       04  –   0       0A  –   0
  2     02  –   0       08  –   0       06  –   0       03  –   0
  3     07  –   0       03  0D  1       0A  34  1       02  –   0
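Extracting these fields in software is simple masking and shifting; a minimal C sketch for this example system (6-bit page offset, 4 TLB sets), using the address from Example #1 below:

    #include <stdio.h>
    #include <stdint.h>

    /* Field extraction: 14-bit VA, 64 B pages, 4-set TLB. */
    int main(void) {
        uint16_t va   = 0x03D4;            /* example virtual address */
        uint16_t vpo  = va & 0x3F;         /* bits 5-0: page offset */
        uint16_t vpn  = va >> 6;           /* bits 13-6: virtual page number */
        uint16_t tlbi = vpn & 0x3;         /* low 2 VPN bits: TLB set index */
        uint16_t tlbt = vpn >> 2;          /* remaining VPN bits: TLB tag */
        /* prints VPN=0x0F VPO=0x14 TLBI=3 TLBT=0x03 */
        printf("VPN=0x%02X VPO=0x%02X TLBI=%u TLBT=0x%02X\n",
               (unsigned) vpn, (unsigned) vpo, (unsigned) tlbi, (unsigned) tlbt);
        return 0;
    }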

Simple Memory System Cache
- 16 lines, 4-byte block size
- Physically addressed, direct mapped
- CO = PA bits 1-0, CI = PA bits 5-2, CT = PA bits 11-6

  Idx  Tag  Valid  B0 B1 B2 B3     Idx  Tag  Valid  B0 B1 B2 B3
  0    19   1      99 11 23 11     8    24   1      3A 00 51 89
  1    15   0      –  –  –  –      9    2D   0      –  –  –  –
  2    1B   1      00 02 04 08     A    2D   1      93 15 DA 3B
  3    36   0      –  –  –  –      B    0B   0      –  –  –  –
  4    32   1      43 6D 8F 09     C    12   0      –  –  –  –
  5    0D   1      36 72 F0 1D     D    16   1      04 96 34 15
  6    31   0      –  –  –  –      E    13   1      83 77 1B D3
  7    16   1      11 C2 DF 03     F    14   0      –  –  –  –

Address Translation Example #1
Virtual address: 0x03D4 (binary 00 1111 0101 0100)
- VPN: 0x0F, TLBI: 3, TLBT: 0x03
- TLB hit? Yes; page fault? No; PPN: 0x0D
Physical address: 0x354
- CO: 0, CI: 0x5, CT: 0x0D
- Cache hit? Yes; byte returned: 0x36

Address Translation Example #2
Virtual address: 0x0020 (binary 00 0000 0010 0000)
- VPN: 0x00, TLBI: 0, TLBT: 0x00
- TLB hit? No; page fault? No; PPN: 0x28
Physical address: 0xA20
- CO: 0, CI: 0x8, CT: 0x28
- Cache hit? No; the byte must be fetched from memory

Address Translation Example #3
Virtual address: 0x0B8F (binary 00 1011 1000 1111)
- VPN: 0x2E, TLBI: 2, TLBT: 0x0B
- TLB hit? No; page fault? Yes; PPN: TBD
Physical address: cannot be formed until the fault is handled and the page brought into memory

Summary
Programmer's view of the virtual address space:
- Each process has its own private, contiguous, linear address space
- It cannot be corrupted by other processes
System view of the VAS and virtual memory:
- Uses memory efficiently by caching virtual memory pages; efficient only because of locality
- Simplifies memory management and programming
- Simplifies protection by providing a convenient interpositioning point to check permissions

Allocating Virtual Pages
- Example: allocating VP 5
[Figure: before allocation, PTE 5 is null and VP 5 does not yet exist on disk]

Allocating Virtual Pages
- Example: allocating VP 5
- The kernel allocates VP 5 on disk and points PTE 5 to it
[Figure: PTE 5 now holds the disk address of the newly created VP 5; no DRAM page is allocated yet]

Page Table Size
Given:
- 4 KB (2^12) page size
- 48-bit address space
- 4-byte PTE
How big is the page table?

Multi-Level Page Tables
Given: 4 KB (2^12) page size, 48-bit address space, 4-byte PTE
Problem: a single-level table would need 256 GB!
- 2^48 addresses / 2^12 bytes per page * 2^2 bytes per PTE = 2^38 bytes
Common solution: multi-level page tables
Example: 2-level page table
- Level 1 table: each PTE points to a (level 2) page table
- Level 2 table: each PTE points to a page (paged in and out like other data)
- The level 1 table stays in memory; level 2 tables are paged in and out

A Two-Level Page Table Hierarchy
[Figure: level 1 PTE 0 and PTE 1 point to level 2 tables covering the 2K allocated VM pages for code and data (VP 0 ... VP 2047); PTEs 2-7 are null, spanning the gap of 6K unallocated VM pages; PTE 8 points to a level 2 table with (1K - 9) null PTEs, 1023 unallocated pages, and a final PTE for the 1 allocated VM page for the stack (VP 9215)]

Translating with a k-level Page Table
- The n-bit virtual address is split into k VPN fields (VPN 1 ... VPN k) plus the VPO (low p bits)
- VPN 1 indexes the level 1 page table, whose entry points to a level 2 table; in general, VPN i indexes the level i table, and the level k entry holds the PPN
- The m-bit physical address is the PPN concatenated with the PPO (= VPO)
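A minimal C sketch of such a walk, assuming 4 levels of 9-bit indices over a 12-bit page offset (the x86-64 layout discussed next); pte_t and the table_at() helper are hypothetical stand-ins for real hardware structures:

    #include <stdint.h>

    #define LEVELS      4
    #define INDEX_BITS  9                  /* 512 entries per table */
    #define PAGE_BITS   12                 /* 4 KB pages */

    /* Hypothetical PTE: bit 0 = present, bits 51-12 = next-level PPN. */
    typedef uint64_t pte_t;
    pte_t *table_at(uint64_t ppn);         /* hypothetical: table stored at PPN */

    /* Walk the k-level tree from the root table; returns 0 on a page fault. */
    int walk(pte_t *root, uint64_t va, uint64_t *pa) {
        pte_t *table = root;
        for (int level = 1; level <= LEVELS; level++) {
            /* VPN i is a 9-bit field, highest field first */
            int shift = PAGE_BITS + (LEVELS - level) * INDEX_BITS;
            uint64_t index = (va >> shift) & ((1u << INDEX_BITS) - 1);
            pte_t pte = table[index];
            if (!(pte & 1))
                return 0;                  /* not present: page fault */
            uint64_t ppn = (pte >> PAGE_BITS) & ((1ULL << 40) - 1);
            if (level == LEVELS) {         /* last level: PPN ++ PPO */
                *pa = (ppn << PAGE_BITS) | (va & ((1u << PAGE_BITS) - 1));
                return 1;
            }
            table = table_at(ppn);         /* descend to the next level */
        }
        return 0;                          /* unreachable */
    }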

x86-64 Paging
Origin:
- AMD's way of extending x86 to a 64-bit instruction set; Intel followed with "EM64T"
Requirements:
- 48-bit virtual address: 256 terabytes (TB)
  - Not yet ready for full 64 bits: nobody can buy that much DRAM yet, and the mapping tables would be huge
- 52-bit physical address = 40 bits for the PPN
  - Requires 64-bit table entries
- Keep the traditional x86 4 KB page size, and the same size for page tables:
  - (4096 bytes per PT) / (8 bytes per PTE) = only 512 entries per page

Intel Core i7 Memory System
[Figure: processor package with 4 cores. Per core:]
- Registers, instruction fetch, MMU (address translation)
- L1 d-cache: 32 KB, 8-way; L1 i-cache: 32 KB, 8-way
- L1 d-TLB: 64 entries, 4-way; L1 i-TLB: 128 entries, 4-way
- L2 unified cache: 256 KB, 8-way; L2 unified TLB: 512 entries, 4-way
Shared by all cores:
- L3 unified cache: 8 MB, 16-way
- DDR3 memory controller to main memory: 3 x 64 bits @ 10.66 GB/s, 32 GB/s total
- QuickPath interconnect to other cores and the I/O bridge: 4 links @ 25.6 GB/s, 102.4 GB/s total

Intel Core i7
- How many caches (including TLBs) are on this chip?
- High end of Intel's "Core" brand: 731M transistors, 1366 pins
- The quad-core Core i7 was announced in late 2008; a six-core addition was launched in March 2010

Review of Abbreviations
Components of the virtual address (VA):
- TLBI: TLB index
- TLBT: TLB tag
- VPO: virtual page offset
- VPN: virtual page number
Components of the physical address (PA):
- PPO: physical page offset (same as VPO)
- PPN: physical page number
- CO: byte offset within cache line
- CI: cache index
- CT: cache tag

Overview of Core i7 Address Translation
- Virtual address (VA): 36-bit VPN + 12-bit VPO; the VPN splits into a 32-bit TLBT and 4-bit TLBI for the L1 TLB (16 sets, 4 entries/set)
- On a TLB miss, the 36-bit VPN splits into four 9-bit fields (VPN1 ... VPN4) that index the 4-level page tables rooted at CR3; each level yields a PTE
- Physical address (PA): 40-bit PPN + 12-bit PPO, viewed by the L1 d-cache (64 sets, 8 lines/set) as a 40-bit CT, 6-bit CI, and 6-bit CO
- On an L1 miss, the access (32/64 bits) goes to L2, L3, and main memory

TLB Translation
- Partition the VPN into TLBT and TLBI
- Is the PTE for this VPN cached in set TLBI?
  - Yes: check permissions, build the physical address
  - No: read the PTE (and others as necessary) from memory via page table translation, then build the physical address

TLB Miss: Page Table Translation
- The virtual address splits into four 9-bit fields (VPN1 ... VPN4) plus a 12-bit VPO
- CR3 points to the page global directory; VPN1 selects the L1 PTE, which points to the page upper directory; VPN2 selects the L2 PTE, pointing to the page middle directory; VPN3 selects the L3 PTE, pointing to the page table; VPN4 selects the L4 PTE, which holds the 40-bit PPN
- Physical address = 40-bit PPN + 12-bit PPO

BONUS SLIDES

PTE Formats
Level 1-3 PTE, when P=1 (bit 63 down to bit 0):
  XD | unused (62-52) | page table physical base address (51-12) | unused (11-9) | G | PS | – | A | CD | WT | U/S | R/W | P=1
When P=0: the remaining bits are available for the OS (page table location on disk)

Level 4 PTE, when P=1:
  XD | unused (62-52) | page physical base address (51-12) | unused (11-9) | G | – | D | A | CD | WT | U/S | R/W | P=1
When P=0: the remaining bits are available for the OS (page location on disk)

Field meanings:
- P: page table is present in memory
- R/W: read-only or read+write
- U/S: user or supervisor mode access
- WT: write-through or write-back cache policy for this page table
- CD: cache disabled or enabled
- A: accessed (set by MMU on reads and writes, cleared by OS)
- D: dirty (set by MMU on writes, cleared by OS)
- PS: page size 4 KB (0) or 4 MB (1); for level 1 PTEs only
- G: global page (don't evict from TLB on task switch)
- Page table physical base address: 40 most significant bits of the physical page table address
- XD: disable or enable instruction fetches from this page
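Decoding such an entry in software is just masking and shifting; a minimal C sketch for the level 4 format above (the helper names are my own, not a kernel API):

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint64_t pte_t;

    /* Level 4 PTE fields, per the layout above */
    static inline bool     pte_present(pte_t e)  { return e & 1; }          /* P   */
    static inline bool     pte_writable(pte_t e) { return (e >> 1) & 1; }   /* R/W */
    static inline bool     pte_user(pte_t e)     { return (e >> 2) & 1; }   /* U/S */
    static inline bool     pte_accessed(pte_t e) { return (e >> 5) & 1; }   /* A   */
    static inline bool     pte_dirty(pte_t e)    { return (e >> 6) & 1; }   /* D   */
    static inline bool     pte_no_exec(pte_t e)  { return (e >> 63) & 1; }  /* XD  */

    /* Bits 51-12: 40-bit physical base address of the page */
    static inline uint64_t pte_base(pte_t e) {
        return e & 0x000FFFFFFFFFF000ULL;
    }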

L1 Cache Access
- Partition the physical address into CO, CI, and CT (here: 40-bit CT, 6-bit CI, 6-bit CO)
- Use CT to determine whether the line containing the word at address PA is cached in set CI
  - No: check L2 (then L3 and main memory)
  - Yes: extract the word at byte offset CO and return it to the processor

Speeding Up L1 Access: A "Neat Trick"
Observation:
- The bits that determine CI are identical in the virtual and physical address: CI and CO lie entirely within the page offset, which translation does not change
- So we can index into the cache while address translation is taking place
- Generally we hit in the TLB, so the PPN bits (CT bits) are available quickly for the tag check
- "Virtually indexed, physically tagged"
- The cache is carefully sized to make this possible
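The sizing constraint is easy to verify: with the Core i7 parameters from the slides (64-byte lines, 64 sets, 4 KB pages), the index and offset bits fit exactly inside the page offset. A compile-time check of that arithmetic (C11):

    #include <assert.h>

    enum {
        LINE_BYTES = 64,   /* => 6 CO bits */
        SETS       = 64,   /* => 6 CI bits */
        PAGE_BYTES = 4096, /* => 12 VPO/PPO bits */
    };

    /* VIPT is safe iff set index + block offset fit in the page offset:
       sets * line_bytes <= page_bytes, i.e. CI + CO bits <= VPO bits. */
    static_assert(SETS * LINE_BYTES <= PAGE_BYTES,
                  "cache index would depend on translated (PPN) bits");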

Linux VM "Areas"
- Each process's task_struct points (via mm) to an mm_struct, which holds pgd (the address of the level 1 page table) and mmap (a list of vm_area_structs)
- Each vm_area_struct describes one area of the process's virtual memory (e.g. shared libraries at 0x40000000, data at 0x0804a020, text at 0x08048000):
  - vm_start, vm_end: the area's bounds
  - vm_prot: read/write permissions for all pages in this area
  - vm_flags: shared/private status of all pages in this area
  - vm_next: the next area in the list

Linux Page Fault Handling
- Is the VA legal, i.e., is it in an area defined by some vm_area_struct?
  - If not (case 1): signal a segmentation violation
- Is the operation legal, i.e., can the process read/write this area?
  - If not (case 2, e.g. a write to read-only text): signal a protection violation
- Otherwise it is a valid address (case 3): handle the fault

Memory Mapping
- Creation of a new VM area is done via "memory mapping": create a new vm_area_struct and page tables for the area
- The area can be backed by (i.e., get its initial values from):
  - A regular file on disk (e.g., an executable object file): initial page bytes come from a section of the file
  - Nothing (e.g., .bss), aka an "anonymous file": the first fault allocates a physical page full of 0's (demand-zero); once the page is written to (dirtied), it is like any other page
- Dirty pages are swapped back and forth between a special swap file
- Key point: no virtual pages are copied into physical memory until they are referenced! Known as "demand paging"; crucial for time and space efficiency

User-Level Memory Mapping
void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset);
[Figure: len bytes starting at byte offset offset of the disk file specified by file descriptor fd are mapped at start (or an address chosen by the kernel) in process virtual memory]

User-Level Memory Mapping
void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset);
- Maps len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start
  - start: may be 0 for "pick an address"
  - prot: PROT_READ, PROT_WRITE, ...
  - flags: MAP_PRIVATE, MAP_SHARED, ...
- Returns a pointer to the start of the mapped area (which may not be start)
Example: fast file copy
- Useful for applications like Web servers that need to quickly copy files
- mmap() allows file transfers without copying into user space

mmap() Example: Fast File Copy

    #include <unistd.h>
    #include <stdlib.h>      /* for exit() */
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>

    /* a program that uses mmap to copy the file input.txt to stdout */
    int main(void)
    {
        struct stat statbuf;
        int fd;
        size_t size;
        char *bufp;

        /* open the file & get its size */
        fd = open("./input.txt", O_RDONLY);
        fstat(fd, &statbuf);
        size = statbuf.st_size;

        /* map the file to a new VM area */
        bufp = mmap(0, size, PROT_READ, MAP_PRIVATE, fd, 0);

        /* write the VM area to stdout */
        write(1, bufp, size);
        exit(0);
    }

Exec() Revisited
To run a new program p in the current process using exec():
- Free the vm_area_structs and page tables for the old areas
- Create new vm_area_structs and page tables for the new areas: stack, BSS, data, text, shared libs
  - Text and data are backed by the ELF executable object file
  - BSS and stack are initialized to zero
- Set the PC to the entry point in .text
- Linux will fault in code and data pages as needed
[Figure: the resulting process VM, bottom to top: program text (.text) and initialized data (.data) from p; uninitialized data (.bss), demand-zero; run-time heap (via malloc) up to brk; memory-mapped region for shared libraries (libc.so .text/.data); user stack below %esp, demand-zero; above it, kernel code/data/stack (same for each process) and process-specific data structures (page tables, task and mm structs); the rest is forbidden]

Fork() Revisited
To create a new process using fork():
- Make copies of the old process's mm_struct, vm_area_structs, and page tables; at this point the two processes share all of their pages
- How to get separate spaces without copying all the virtual pages from one space to another? The "copy-on-write" (COW) technique
Copy-on-write:
- Mark the PTEs of writeable areas as read-only, so writes by either process to these pages cause page faults
- Flag the vm_area_structs for these areas as private "copy-on-write"
- The fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions
Net result: copies are deferred until absolutely necessary (i.e., when one of the processes tries to modify a shared page)

Discussion
- How does the kernel manage stack growth?
- How does the kernel manage heap growth?
- How does the kernel manage dynamic libraries?
- How can multiple user processes share writable data?
- How can mmap be used to access file contents in arbitrary (non-sequential) order?

9.9: Dynamic Memory Allocation
Motivation:
- The size of data structures may be known only at runtime
Essentials:
- Heap: demand-zero memory immediately after the bss area; grows upward
- The allocator manages the heap as a collection of variable-sized blocks
Two styles of allocators:
- Explicit: allocation and freeing both explicit, e.g. C (malloc and free), C++ (new and delete)
- Implicit: allocation explicit, freeing implicit, e.g. Java, Lisp, ML; garbage collection automatically frees unused blocks
Tradeoffs: ease of (correct) use, runtime overhead

Heap Management
- Allocators request additional heap memory from the kernel using the sbrk() function, e.g. sbrk(amt_more), which moves the "brk" pointer up (sbrk returns the old break on success, or (void *) -1 on error)
[Figure: process address space, bottom to top: program text (.text), initialized data (.data), uninitialized data (.bss), run-time heap (via malloc) growing up to the "brk" ptr; the stack growing down from %esp; kernel virtual memory at the top, protected from user code]
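A minimal sketch of the call (note that sbrk is a legacy interface; modern allocators on Linux also use mmap for large requests, and some systems deprecate sbrk entirely):

    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        void *old_brk = sbrk(0);           /* current break, no change */
        if (sbrk(4096) == (void *) -1)     /* ask the kernel for 4 KB more heap */
            perror("sbrk");
        void *new_brk = sbrk(0);
        printf("heap grew from %p to %p\n", old_brk, new_brk);
        return 0;
    }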

Heap Management
A classic CS problem. Requirements:
- Handle an arbitrary request sequence
- Respond immediately to allocation requests
- Meet alignment requirements
- Avoid modifying allocated blocks
- Maximize throughput and memory utilization; avoid fragmentation
Specific issues to consider:
- How are free blocks tracked?
- Which free block to pick for the next allocation?
- What to do with the remainder of a free block when part is allocated?
- How to coalesce freed blocks?

Heap Management
Block format: allocators typically maintain a header, then the payload, then optional padding
- Stepping beyond block bounds can really mess up the allocator
- The header encodes the size and an allocated bit a:
  - a = 1: allocated block; a = 0: free block
  - size: block size
  - payload: application data (allocated blocks only)
[Figure: format of allocated and free blocks: header (size | a), payload, optional padding]
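Because block sizes are multiples of the alignment, the low bits of size are always zero, so the allocated bit can be packed into them. A minimal sketch in the style of the CS:APP allocator macros (the names are conventional teaching names, not a library API):

    #include <stdint.h>

    typedef uint32_t header_t;   /* one word: size with allocated bit packed in */

    /* Sizes are multiples of 8, so bits 2-0 of size are free for flags. */
    #define PACK(size, a)   ((header_t)((size) | (a)))
    #define GET_SIZE(h)     ((h) & ~(header_t)0x7)  /* clear the flag bits   */
    #define GET_ALLOC(h)    ((h) & 0x1)             /* a=1 allocated, 0 free */

    /* Example: header_t h = PACK(24, 1);
       then GET_SIZE(h) == 24 and GET_ALLOC(h) == 1 */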

9.10: Garbage Collection
- Related to dynamic memory allocation
- Garbage collection: automatically reclaiming allocated blocks that are no longer used
- The need arises when blocks are not explicitly freed
- Also a classic CS problem

9.11: Memory-Related Bugs
Selected highlights:
- Dereferencing bad pointers
- Reading uninitialized memory
- Overwriting memory
- Referencing nonexistent variables
- Freeing blocks multiple times
- Referencing freed blocks
- Failing to free blocks

Dereferencing Bad Pointers
The classic scanf bug:

    int val;
    scanf("%d", val);   /* BUG: passes val's (garbage) value as a pointer;
                           should be &val */

Reading Uninitialized Memory
Assuming that heap data is initialized to zero:

    /* return y = Ax */
    int *matvec(int **A, int *x) {
        int *y = malloc(N * sizeof(int));   /* BUG: malloc does not zero memory */
        int i, j;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                y[i] += A[i][j] * x[j];     /* accumulates into garbage; use
                                               calloc or set y[i] = 0 first */
        return y;
    }

Overwriting Memory
Allocating the (possibly) wrong sized object:

    int **p;
    p = malloc(N * sizeof(int));        /* BUG: an array of N pointers needs
                                           N * sizeof(int *) bytes */
    for (i = 0; i < N; i++) {
        p[i] = malloc(M * sizeof(int));
    }

Overwriting Memory
Off-by-one error:

    int **p;
    p = malloc(N * sizeof(int *));
    for (i = 0; i <= N; i++) {          /* BUG: <= writes one element past the
                                           end of the array; should be i < N */
        p[i] = malloc(M * sizeof(int));
    }

Overwriting Memory
Not checking the max string size:

    char s[8];
    int i;
    gets(s);    /* BUG: reads "123456789" from stdin, overflowing s;
                   use fgets(s, sizeof(s), stdin) instead */

The basis for classic buffer overflow attacks: the 1988 Internet worm, modern attacks on Web servers, the AOL/Microsoft IM war.

Overwriting Memory
Referencing a pointer instead of the object it points to. The code below is intended to remove the first item in a binary heap of *size items, then reheapify the remaining items:

    int *BinheapDelete(int **binheap, int *size) {
        int *packet;
        packet = binheap[0];
        binheap[0] = binheap[*size - 1];
        *size--;    /* BUG: * and -- have equal precedence and associate right
                       to left, so this is *(size--); intended: (*size)-- */
        Heapify(binheap, *size, 0);
        return (packet);
    }

Other Pointer Pitfalls
Misunderstanding pointer arithmetic. The code below is intended to scan an array of ints and return a pointer to the first occurrence of val:

    int *search(int *p, int val) {
        while (*p && *p != val)
            p += sizeof(int);   /* BUG: pointer arithmetic already scales by
                                   sizeof(int), so this skips 3 of every 4
                                   elements; should be p++ */
        return p;
    }

Referencing Nonexistent Variables
Forgetting that local variables disappear when a function returns:

    int *foo() {
        int val;
        return &val;    /* BUG: val lives in foo's stack frame, which is
                           reclaimed on return; the pointer dangles */
    }

Freeing Blocks Multiple Times
Nasty!

    x = malloc(N * sizeof(int));
    /* do some stuff with x */
    free(x);

    y = malloc(M * sizeof(int));
    /* do some stuff with y */
    free(x);    /* BUG: double free; x was already freed above */

Referencing Freed Blocks
Evil!

    x = malloc(N * sizeof(int));
    /* do some stuff with x */
    free(x);
    ...
    y = malloc(M * sizeof(int));
    for (i = 0; i < M; i++)
        y[i] = x[i]++;    /* BUG: reads and writes x's block after free; the
                             allocator may have reused it (perhaps for y) */

Failing to Free Blocks (Memory Leaks)
Slow, long-term killer!

    foo() {
        int *x = malloc(N * sizeof(int));
        ...
        return;    /* BUG: x goes out of scope, but the block is never freed */
    }

Failing to Free Blocks (Memory Leaks)
Freeing only part of a data structure:

    struct list {
        int val;
        struct list *next;
    };

    foo() {
        struct list *head = malloc(sizeof(struct list));
        head->val = 0;
        head->next = NULL;
        /* create, manipulate rest of the list */
        ...
        free(head);    /* BUG: frees only the head node; the rest of the
                          list leaks */
        return;
    }

Before You Sell the Book Back...
Consider the useful content of the remaining chapters.
Chapter 10: System-level I/O
- Unix file I/O: opening and closing files, reading and writing files, reading file metadata
- Sharing files, I/O redirection
- Standard I/O

Chapter 11: Network Programming
- Client-server programming model
- Networks
- The global IP Internet: IP addresses, domain names, DNS servers
- Sockets
- Web servers

Chapter 12: Concurrent Programming
- CP with processes (e.g., fork, exec, waitpid)
- CP with I/O multiplexing: ask the kernel to suspend the process, returning control when certain I/O events have occurred
- CP with threads, shared variables, and semaphores for synchronization

Class Wrap-up
Final exam in the testing center, both days of finals:
- Check testing center hours and days!
- 50 multiple-choice questions covering all chapters (4-7 questions each from chapters 2-9)
- 3-hour time limit, though you are unlikely to use all of it
- Review the midterm solutions and chapter review questions
Remember:
- The final exam score replaces lower midterm scores
- The 4 lowest quizzes will be dropped in computing the overall quiz score
Assignments:
- The deadline for late labs is tomorrow (15 June); notify the instructor immediately if you are still working on a lab
- All submissions from here on out: send the instructor an email

Reminder + Request
From the class syllabus:
- You must complete all labs to receive a passing grade in the class
- You must receive a passing grade on the final to pass the class
Please double-check all scores on Blackboard:
- Contact the TA for problems with labs or homework
- Contact the instructor for problems with posted exam or quiz scores

Parting Thought
Again and again I admonish my students both in Europe and in America: "Don't aim at success – the more you aim at it and make it a target, the more you are going to miss it. For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one's personal dedication to a cause greater than oneself or as the by-product of one's surrender to a person other than oneself. Happiness must happen, and the same holds for success: you have to let it happen by not caring about it. I want you to listen to what your conscience commands you to do and go on to carry it out to the best of your knowledge. Then you will live to see that in the long run – in the long run, I say! – success will follow you precisely because you had forgotten to think of it."
– Viktor Frankl