CS 153 Design of Operating Systems Spring 2015 Lecture 16: Memory Management and Paging

Announcement Homework 2 is out. It will be posted on ilearn today and is due in a week (by the end of May 13th).

Recap: Fixed Partitions [Diagram: physical memory divided into fixed partitions P1 through P5; the base register holds P4's base, and adding the virtual address offset to that base yields the physical address.]

Recap: Paging Paging solves the external fragmentation problem by using fixed-size units in both physical and virtual memory. [Diagram: virtual pages 1 through N mapped onto frames scattered throughout physical memory.]

Segmentation Segmentation is a technique that partitions memory into logically related data units (module, procedure, stack, data, file, etc.), i.e., units of memory from the user's perspective. It is a natural extension of variable-sized partitions: variable-sized partitions = 1 segment/process, segmentation = many segments/process. Fixed partition : Paging :: Variable partition : Segmentation. Hardware support: multiple base/limit pairs, one per segment (the segment table); segments are named by #, used to index into the table; virtual addresses become <segment #, offset>.

Segmentation [Diagram: a program's segments as seen from the program's perspective, placed at various locations in physical memory.]

Segment Lookups [Diagram: the segment # of the virtual address indexes the segment table to fetch a base/limit pair; if the offset is less than the limit, the physical address is base + offset, otherwise a protection fault is raised.]

Segmentation Example Assuming we have 4 segments, we need 2 bits in the virtual address to identify the segment; the remaining 30 bits are the offset (address format: 2-bit Seg#, 30-bit Offset). Virtual address 0x01234567 therefore goes to segment #0; if that segment's base is 0x01000000, the physical address is 0x01000000 + 0x01234567 = 0x02234567. When would this instead raise a segmentation fault?
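A minimal sketch of this lookup in C, assuming the 2-bit Seg# / 30-bit offset split above; the segment-table layout and the names (struct segment, seg_table, seg_translate) are illustrative, not something the slides define:

#include <stdint.h>
#include <stdbool.h>

/* One base/limit pair per segment, as in the segment table above. */
struct segment {
    uint32_t base;   /* physical base address of the segment */
    uint32_t limit;  /* size of the segment in bytes          */
    bool     valid;
};

#define NUM_SEGMENTS 4                /* 4 segments => 2-bit segment number */
struct segment seg_table[NUM_SEGMENTS];

/* Translate a virtual address; returns (uint32_t)-1 on a protection fault. */
uint32_t seg_translate(uint32_t va)
{
    uint32_t segno  = va >> 30;           /* top 2 bits select the segment */
    uint32_t offset = va & 0x3fffffff;    /* low 30 bits are the offset    */

    if (!seg_table[segno].valid || offset >= seg_table[segno].limit)
        return (uint32_t)-1;              /* segmentation (protection) fault */

    return seg_table[segno].base + offset;
}

With seg_table[0] set to base 0x01000000, a large enough limit, and valid = true, seg_translate(0x01234567) returns 0x02234567, matching the example; an offset at or beyond the limit is what would trigger the fault.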

Segment Table Extensions Can have one segment table per process; segment #s are then process-relative. Can easily share memory by putting the same translation into both processes' base/limit pairs, and can share with different protections (same base/limit, different protection bits). Problems: cross-segment addresses (segments need to have the same #s for pointers to them to be shared among processes); large segment tables (keep them in main memory, use a hardware cache for speed); large segments (internal fragmentation, and paging them to/from disk is expensive). Why have segment #s be process-relative?

Segmentation and Paging Can combine segmentation and paging; the x86 supports both segments and paging. Use segments to manage logically related units (module, procedure, stack, file, data, etc.); segments vary in size, but are usually large (multiple pages). Use pages to partition segments into fixed-size chunks, which makes segments easier to manage within physical memory: segments become "pageable" (rather than moving whole segments into and out of memory, just move the page-sized portions of a segment), and page table entries need to be allocated only for those pieces of the segments that have actually been allocated. Tends to be complex…

Managing Page Tables Last lecture we computed the size of the page table for a 32-bit address space with 4K pages to be 4MB. This is far too much overhead for each process. How can we reduce this overhead? Observation: we only need to map the portion of the address space actually being used (a tiny fraction of the entire address space). How do we map only what is being used? We could dynamically extend the page table, but that does not work if the address space is sparse (internal fragmentation); instead, use another level of indirection: two-level page tables.
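As a quick sanity check of the 4MB figure, a tiny C program that redoes the arithmetic (assuming 4-byte PTEs, as last lecture did):

#include <stdio.h>

int main(void)
{
    unsigned long long addr_space = 1ULL << 32;  /* 32-bit virtual address space */
    unsigned long long page_size  = 1ULL << 12;  /* 4KB pages                    */
    unsigned long long pte_size   = 4;           /* bytes per page table entry   */

    unsigned long long num_ptes   = addr_space / page_size;  /* 2^20 = 1M PTEs */
    unsigned long long table_size = num_ptes * pte_size;     /* 4MB            */

    printf("PTEs: %llu, flat page table size: %llu MB\n",
           num_ptes, table_size >> 20);
    return 0;
}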

Two-Level Page Tables Virtual addresses (VAs) now have three parts: the master page number, the secondary page number, and the offset. The master page number indexes the master page table, which points to a secondary page table; the secondary page number indexes the secondary page table, which maps to a physical page; the offset indicates where within that physical page the address is located.

Page Lookups [Diagram: the page number portion of the virtual address indexes the page table to find a page frame; the physical address is that page frame combined with the unchanged offset.]
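In code, the single-level lookup in the figure might look roughly like the sketch below; the flat page_table array, pte_t layout, and PTE_VALID flag are illustrative assumptions, not a real MMU's format:

#include <stdint.h>

#define PAGE_SHIFT 12                       /* 4KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define PTE_VALID  0x1u

/* A PTE here is just a physical frame number plus flag bits. */
typedef struct { uint32_t frame; uint32_t flags; } pte_t;

extern pte_t page_table[1 << 20];           /* one entry per virtual page */

/* Returns the physical address, or (uint64_t)-1 if the page is unmapped. */
uint64_t translate(uint32_t va)
{
    uint32_t vpn    = va >> PAGE_SHIFT;     /* virtual page number */
    uint32_t offset = va &  PAGE_MASK;

    pte_t pte = page_table[vpn];
    if (!(pte.flags & PTE_VALID))
        return (uint64_t)-1;                /* a real system raises a page fault */

    return ((uint64_t)pte.frame << PAGE_SHIFT) | offset;
}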

Two-Level Page Tables [Diagram: the master page number indexes the master page table to locate a secondary page table; the secondary page number indexes that table to find the page frame; the page frame combined with the offset forms the physical address.]

Example 4KB pages, 4 bytes/PTE. How many bits in the offset? 4K = 2^12, so 12 bits. We want the master page table to fit in one page: 4K / 4 bytes = 1K entries, and hence 1K secondary page tables. How many bits in the other fields? Master page number = 10 bits (because there are 1K entries), offset = 12 bits, so secondary page number = 32 - 10 - 12 = 10 bits.
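Putting the 10/10/12 split to work, here is a sketch of the two-level walk; the pointer-based master table and the names (master_table, translate2) are assumptions for illustration, not the x86 page table format:

#include <stddef.h>
#include <stdint.h>

#define OFFSET_BITS    12
#define SECONDARY_BITS 10
#define MASTER_BITS    10

typedef struct { uint32_t frame; uint32_t valid; } pte_t;

/* Master table: 1K pointers to secondary tables; each secondary table: 1K PTEs. */
extern pte_t *master_table[1 << MASTER_BITS];

uint64_t translate2(uint32_t va)
{
    uint32_t offset    = va & ((1u << OFFSET_BITS) - 1);
    uint32_t secondary = (va >> OFFSET_BITS) & ((1u << SECONDARY_BITS) - 1);
    uint32_t master    = va >> (OFFSET_BITS + SECONDARY_BITS);

    pte_t *second_level = master_table[master];
    if (second_level == NULL)               /* no secondary table allocated yet */
        return (uint64_t)-1;

    pte_t pte = second_level[secondary];
    if (!pte.valid)
        return (uint64_t)-1;                /* would raise a page fault */

    return ((uint64_t)pte.frame << OFFSET_BITS) | offset;
}

Note that two table accesses happen before the data access itself, which is exactly the extra cost the later slides on TLBs address.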

Addressing Page Tables Where do we store page tables (in which address space)? Physical memory: easy to address, no translation required, but allocated page tables consume memory for the lifetime of the virtual address space. Virtual memory (the OS virtual address space): cold (unused) page table pages can be paged out to disk, but addressing the page tables then itself requires translation. How do we stop the recursion? Do not page the outer page table (called wiring). If we're going to page the page tables, we might as well page the entire OS address space too; we just need to wire special code and data (fault and interrupt handlers).

Page Table in Physical Memory [Diagram: the page tables of P1 and P2 reside in physical memory; a register holds the current page table's physical base address, which is indexed by the page number to produce the page frame that is combined with the offset.]

Page Out the Page Table Pages What happens during a page table lookup? The MMU needs to access P1's page table to find the translation, but if the page table page is itself paged out, how can it be looked up? [Diagram: P1's page table pages in virtual memory, some resident in physical memory and some paged out; a register stores the page table's virtual base address.]

Two-level Paging Two-level paging reduces the memory overhead of paging: only one master page table and one secondary page table are needed when a process begins, and as the address space grows, more secondary page tables are allocated and PTEs are added to the master page table. What problem remains? Hint: what about memory lookups?

Efficient Translations Recall that our original page table scheme doubled the latency of memory lookups: one lookup into the page table, another to fetch the data. Two-level page tables now triple the latency: two lookups into the page tables and a third to fetch the data, and this assumes the page tables are in memory. How can we use paging but have lookups cost about the same as fetching directly from memory? Cache translations in hardware: the Translation Lookaside Buffer (TLB), managed by the Memory Management Unit (MMU).

TLBs Translation Lookaside Buffers are implemented in hardware and translate virtual page #s into PTEs (not physical addresses); this can be done in a single machine cycle. A TLB is a fully associative cache (all entries are looked up in parallel): the keys are virtual page numbers, and the values are PTEs (entries from the page tables). With the PTE plus the offset, the physical address can be calculated directly. Why does this help? It exploits locality: processes use only a handful of pages at a time, so 16-48 entries (covering 64-192KB) are enough to keep those pages "mapped". Hit rates are therefore very important.
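A small software model of the fully associative lookup described above; a real TLB compares all entries in parallel in hardware, and the structure and names here are only illustrative:

#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 48                 /* the slides cite 16-48 entries */

struct tlb_entry {
    uint32_t vpn;                      /* key: virtual page number           */
    uint32_t pte;                      /* value: the cached page table entry */
    bool     valid;
};

struct tlb_entry tlb[TLB_ENTRIES];

/* On a hit, return the cached PTE; on a miss, the MMU (or the OS) must
 * walk the page tables and install the missing translation. */
bool tlb_lookup(uint32_t vpn, uint32_t *pte_out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {   /* hardware does this in parallel */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pte_out = tlb[i].pte;            /* TLB hit */
            return true;
        }
    }
    return false;                             /* TLB miss */
}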

TLB Hit A TLB hit eliminates one or more memory accesses. [Diagram: (1) the CPU sends a virtual address to the MMU; (2) the MMU sends the VPN to the TLB; (3) the TLB returns the PTE; (4) the MMU sends the resulting physical address to the cache/memory; (5) the data is returned to the CPU.]

Managing TLBs Hit rate: address translations for most instructions are handled using the TLB, >99% of translations, but there are misses (TLB misses)… Who places translations into the TLB (loads the TLB)? Hardware (the Memory Management Unit) [x86]: the MMU knows where the page tables are in main memory; the OS maintains the tables and the HW accesses them directly, so the tables have to be in a HW-defined format (inflexible). Software-loaded TLB (the OS) [MIPS, Alpha, SPARC, PowerPC]: a TLB miss faults to the OS, which finds the appropriate PTE and loads it into the TLB; this must be fast (but still takes 20-200 cycles); the CPU ISA has instructions for manipulating the TLB, and the tables can be in any format convenient for the OS (flexible).
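For the software-loaded case, the OS miss handler conceptually just finds the PTE and writes it into the TLB. A sketch assuming hypothetical helpers lookup_pte(), tlb_write(), and page_fault(); real MIPS/Alpha handlers are hand-tuned assembly, not C:

#include <stdint.h>

/* Hypothetical helpers, not a real ISA interface. */
extern uint32_t lookup_pte(uint32_t vpn);              /* walk this process's page table */
extern void     tlb_write(uint32_t vpn, uint32_t pte); /* privileged TLB-fill operation  */
extern void     page_fault(uint32_t vpn);

/* Invoked by the TLB-miss trap; must be fast (tens to hundreds of cycles). */
void tlb_miss_handler(uint32_t faulting_vpn)
{
    uint32_t pte = lookup_pte(faulting_vpn);
    if (pte == 0)                       /* no valid mapping: a genuine page fault */
        page_fault(faulting_vpn);
    else
        tlb_write(faulting_vpn, pte);   /* load the translation and retry the access */
}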

Managing TLBs (2) The OS ensures that the TLB and page tables are consistent: when it changes the protection bits of a PTE, it needs to invalidate that PTE if it is in the TLB (using a special hardware instruction). The TLB is reloaded on a process context switch: all entries are invalidated. Why? Who does it? When the TLB misses and a new PTE has to be loaded, a cached PTE must be evicted; choosing the PTE to evict is called the TLB replacement policy. It is implemented in hardware and is often simple, e.g., Least Recently Used (LRU).
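These consistency rules boil down to two invalidation points. A sketch using hypothetical tlb_invalidate() and tlb_flush_all() primitives (on x86 these roughly correspond to invlpg and reloading CR3):

#include <stdint.h>

/* Hypothetical hardware-abstraction primitives. */
extern void tlb_invalidate(uint32_t vpn);   /* drop one cached translation */
extern void tlb_flush_all(void);            /* drop every cached entry     */
extern void set_pte_protection(uint32_t vpn, uint32_t prot);
extern void load_page_table_base(uint64_t pt_phys);

/* Changing protection bits: update the PTE, then invalidate any stale copy. */
void change_protection(uint32_t vpn, uint32_t prot)
{
    set_pte_protection(vpn, prot);
    tlb_invalidate(vpn);            /* otherwise the TLB may keep the old bits */
}

/* Context switch: the new page table makes every cached translation stale. */
void switch_address_space(uint64_t new_pt_phys)
{
    load_page_table_base(new_pt_phys);
    tlb_flush_all();                /* the old entries belong to the previous
                                       process's address space */
}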

Summary Virtual memory: fixed partitions are easy to use, but suffer internal fragmentation; variable partitions are more efficient, but suffer external fragmentation; paging uses small, fixed-size chunks and is efficient for the OS; segmentation manages memory in chunks from the user's perspective; combining paging and segmentation gets the benefits of both. Optimizations: managing page tables (space) and efficient translations with TLBs (time).

Next time… Read chapters 8 and 9 in either textbook