Paging Andrew Whitaker CSE451

Review: Process (Virtual) Address Space
Each process has its own address space, divided into user space and kernel space.
The OS and the hardware translate virtual addresses to physical frames.

Multiple Processes
Each process has its own address space, and its own set of page tables.
Kernel mappings are the same for all processes.
Note: threads within a process share page tables.

Linux Physical Memory Layout

Paging Issues
Memory scarcity (virtual memory, stay tuned…)
Making Paging Fast
Reducing the Overhead of Page Tables

Review: Mechanics of Address Translation
A virtual address is split into a virtual page # and an offset.
The page table maps the virtual page # to a page frame #; the physical address is the page frame # combined with the unchanged offset.
Problem: page tables live in memory, so every translation costs an extra memory reference.
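
To make the mechanics concrete, here is a minimal C sketch of a one-level translation; the 4 KB page size and the names (translate, page_table) are illustrative, and a real MMU does this in hardware and raises a page fault instead of returning an error value.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                      /* 4 KB pages: low 12 bits are the offset */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024                    /* size of this toy address space */

    typedef struct {
        bool     valid;                        /* is this virtual page mapped? */
        uint32_t frame;                        /* physical page frame number */
    } pte_t;

    static pte_t page_table[NUM_PAGES];        /* one-level table, indexed by VPN */

    /* Translate a virtual address to a physical address (toy, one level). */
    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;         /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* byte offset within the page */

        pte_t pte = page_table[vpn];                   /* the extra memory reference */
        if (!pte.valid)
            return (uint32_t)-1;                       /* a real OS raises a page fault here */

        return (pte.frame << PAGE_SHIFT) | offset;     /* frame number + same offset */
    }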

Making Paging Fast
We must avoid a page table lookup for every memory reference; that would double memory access time.
Solution: the Translation Lookaside Buffer (TLB), a fancy name for a cache.
The TLB stores a subset of PTEs (page table entries).
TLBs are small and fast (16-48 entries) and can be accessed "for free" as part of each memory access.

TLB Details
In practice, most (> 99%) of memory translations are handled by the TLB.
Each processor has its own TLB.
The TLB is fully associative: any TLB slot can hold any PTE.
Who fills the TLB? Two options:
Hardware (x86) walks the page table on a TLB miss; the advantage of hardware fill is speed.
A software routine (MIPS, Alpha) fills the TLB on a miss; the advantage of software fill is flexibility.
The TLB itself needs a replacement policy, usually implemented in hardware (LRU).
Again, we're taking advantage of the principle of locality here. If a program accessed its address space willy-nilly, then caching, including TLB caching, would not work.
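
As a rough sketch (not any particular processor's design), a fully associative lookup is just a search over all entries; the size and the names here are made up for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64                     /* small and fully associative */

    typedef struct {
        bool     valid;
        uint32_t vpn;                          /* virtual page number (the tag) */
        uint32_t frame;                        /* cached translation */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Return true and fill *frame on a TLB hit.  On a miss the page table
     * must be walked (by hardware on x86, by a software handler on MIPS/Alpha)
     * and the result inserted into the TLB. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {        /* fully associative: any slot can match */
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;                           /* hit: no page table access needed */
            }
        }
        return false;                                  /* miss: walk the page table, then refill */
    }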

What Happens on a Context Switch?
Each process has its own address space, so each process has its own page table.
Thus, page-table entries are only relevant for a particular process, and the TLB must be flushed on a context switch.
This is one reason context switches are so expensive.

Alternative to Flushing: Address Space IDs
We can avoid flushing the TLB if each entry is tagged with an address space ID (ASID).
When would this work well? When would this not work well?
Example TLB entry format (field widths in bits): ASID (4), V (1), R (1), M (1), prot (2), page frame number (20).
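
The field widths above can be written as a C bit-field, purely as an illustration; real hardware fixes the exact encoding and packing.

    /* One TLB entry matching the widths on the slide: 4-bit ASID,
     * valid/referenced/modified bits, 2-bit protection, 20-bit frame number. */
    typedef struct {
        unsigned asid  : 4;    /* address space ID: whose translation is this? */
        unsigned valid : 1;    /* V: entry is usable */
        unsigned ref   : 1;    /* R: page has been referenced */
        unsigned mod   : 1;    /* M: page has been written (dirty) */
        unsigned prot  : 2;    /* protection (read/write/execute encoding) */
        unsigned pfn   : 20;   /* page frame number */
    } tlb_entry_t;

With ASIDs, a hit requires the entry's ASID to match the currently running process, so translations belonging to other address spaces can stay resident across a context switch.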

TLBs with Multiprocessors
On a multiprocessor, each CPU's TLB stores a subset of the shared page table state.
That state must be kept consistent: when one processor changes a page table entry, stale copies in the other TLBs must be invalidated.

Today's Topics
Page Replacement Strategies
Making Paging Fast
Reducing the Overhead of Page Tables

Page Table Overhead
For a large address space, page table sizes can become enormous.
Example: IA-64 architecture, 64-bit address space, 8 KB pages.
Num PTEs = 2^64 / 2^13 = 2^51
Assuming 8 bytes per PTE: Num Bytes = 2^54 = 16 petabytes
And this is per process!
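
The same arithmetic as a quick sanity check in C; the constants are the ones from this slide.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t va_bits    = 64;    /* 64-bit virtual address space */
        uint64_t page_bits  = 13;    /* 8 KB pages */
        uint64_t pte_bytes  = 8;     /* assumed size of one PTE */

        uint64_t num_ptes   = 1ULL << (va_bits - page_bits);   /* 2^51 entries */
        uint64_t table_size = num_ptes * pte_bytes;             /* 2^54 bytes */

        printf("PTEs: 2^%llu, page table size: %llu PiB (per process!)\n",
               (unsigned long long)(va_bits - page_bits),
               (unsigned long long)(table_size >> 50));          /* 2^54 / 2^50 = 16 */
        return 0;
    }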

Optimizing for Sparse Address Spaces
Observation: very little of the address space is in use at a given time.
Basic idea: only allocate page tables where we need to, and fill in new page tables lazily (on demand).

Implementing Sparse Address Spaces
We need a data structure to keep track of the page tables we have allocated, and this structure must be small; otherwise, we've defeated our original goal.
Solution: multi-level page tables, i.e., page tables of page tables.
"Any problem in CS can be solved with a layer of indirection."

Two-Level Page Tables
The virtual address is split into a master page #, a secondary page #, and an offset.
The master page table entry selects a secondary page table; the secondary page table entry supplies the page frame #, which is combined with the offset to form the physical address.
Key point: not all secondary page tables must be allocated; entries for unused regions of the address space can be left empty.
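
A minimal sketch of the two-level walk, assuming the classic 32-bit 10/10/12 split; the names (walk, master_table_t) are illustrative, and a NULL master entry is exactly the "not allocated" case above.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define ENTRIES    1024                    /* 10 bits of index at each level */

    typedef struct {
        bool     valid;
        uint32_t frame;                        /* physical page frame number */
    } pte_t;

    /* Master table: 1024 pointers to secondary tables; NULL means "never
     * allocated", which is what keeps sparse address spaces cheap. */
    typedef pte_t *master_table_t[ENTRIES];

    /* Two-level walk for a 32-bit virtual address split 10/10/12. */
    uint32_t walk(master_table_t master, uint32_t vaddr)
    {
        uint32_t master_idx    = vaddr >> 22;              /* bits 31..22 */
        uint32_t secondary_idx = (vaddr >> 12) & 0x3FF;    /* bits 21..12 */
        uint32_t offset        = vaddr & (PAGE_SIZE - 1);  /* bits 11..0  */

        pte_t *secondary = master[master_idx];
        if (secondary == NULL)
            return (uint32_t)-1;               /* no secondary table here: fault, allocate lazily */

        if (!secondary[secondary_idx].valid)
            return (uint32_t)-1;               /* table exists, but this page is unmapped */

        return (secondary[secondary_idx].frame << PAGE_SHIFT) | offset;
    }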

Generalizing
Early architectures used 1-level page tables.
The VAX and x86 used 2-level page tables.
The SPARC and Alpha use 3-level page tables.
The Motorola 68030 uses 4-level page tables.
The key point is that the outermost level must be wired down (pinned in physical memory) in order to break the recursion.

Cool Paging Tricks
Basic idea: exploit the layer of indirection between virtual and physical memory.

Trick #1: Shared Libraries
Q: How can we avoid 1000 copies of printf?
A: Shared libraries (on Linux: /usr/lib/*.so).
Many processes (e.g., Firefox, Open Office) each map the same physical copy of libc into their own address space.

Shared Memory Segments
The same physical pages can be mapped into two different virtual address spaces, giving the processes a shared region of memory.

Trick #2: Copy-on-Write
Copy-on-write allows for a fast "copy" by using shared pages.
Especially useful for "fork" operations.
Implementation: pages are shared "read-only"; the OS intercepts write operations and makes a real copy.
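
An illustrative user-level view of copy-on-write using POSIX fork; this shows the observable effect, not the kernel-side implementation. The child's first write to a shared page triggers the real copy, so the parent's data is unchanged.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char *buf = malloc(4096);
        strcpy(buf, "parent data");

        pid_t pid = fork();                    /* child initially shares the parent's pages read-only */
        if (pid == 0) {
            strcpy(buf, "child data");         /* first write faults; the kernel copies the page */
            printf("child sees:  %s\n", buf);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent sees: %s\n", buf);      /* unchanged: the copy was private to the child */
        return 0;
    }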

Trick #3: Memory-Mapped Files
Normally, files are accessed with system calls: open, read, write, close.
Memory mapping instead lets a program access a file's contents with ordinary load/store operations, by mapping the file (e.g., Foo.txt) into its virtual address space.
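
A minimal POSIX sketch using mmap; Foo.txt is just the stand-in file name from the slide.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("Foo.txt", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);

        /* Map the whole file; afterwards ordinary loads read its bytes,
         * and the OS pages the data in on demand. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) return 1;

        fwrite(data, 1, st.st_size, stdout);   /* plain memory access, no read() calls */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }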