Virtual Memory
Partially adapted from: © 2004-2007 Ed Lazowska, Hank Levy, Andrea and Remzi Arpaci-Dusseau, Michael Swift

A system with physical addressing
- Main memory is an array of M contiguous byte-sized cells, starting at address 0, each with a unique physical address
- Physical addressing is the most natural way to access it: addresses generated by the CPU correspond directly to bytes in main memory
- Used in simple systems such as early PCs and embedded microcontrollers (e.g., in cars and elevators)
[Figure: the CPU issues a physical address (PA), e.g. 4, which selects a data word directly from one of the cells 0 .. M-1 of main memory]

Overcommitting Memory
- Example: a set of processes frequently references 33 important pages, but there are only 32 frames in physical memory
- The system repeats the cycle: reference a page not in memory, replace a page in memory with the newly referenced page
- Thrashing: the system spends its time reading and writing pages instead of executing useful instructions
  - Average memory access time approaches the disk access time
  - The illusion breaks: memory appears as slow as disk, rather than disk appearing as fast as memory
- Adding more processes makes thrashing worse
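
A hedged back-of-the-envelope illustration of that last point (the access times below are assumptions, roughly 100 ns for DRAM and 10 ms for disk, not figures from the slides):

  average access time = (1 - p) x t_mem + p x t_disk
  with fault rate p = 0.001:  0.999 x 100 ns + 0.001 x 10,000,000 ns ≈ 10,100 ns

Even one fault per thousand references makes memory look about 100x slower than DRAM, and as p grows the average access time approaches the disk access time.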

Virtual Memory
- The basic abstraction that the OS provides for memory management is virtual memory (VM)
- VM enables programs to execute without requiring their entire address space to be resident in physical memory
  - A program can also execute on machines with less RAM than it "needs"
  - Many programs don't need all of their code or data at once (or ever), e.g., branches they never take, or data they never read/write
  - There is no need to allocate memory for those parts; the OS adjusts the amount allocated based on run-time behavior
- Virtual memory isolates processes from each other
  - One process cannot name addresses visible to others; each process has its own isolated address space
- VM requires hardware and OS support: MMUs, TLBs, page tables, ...

Working Set
- Many applications have a common memory access pattern: the same memory is accessed in a common order over a window of time
- Tied closely to the idea of a computational kernel: the code that is the "heart" of an application and executes for the majority of the time
- The working set is the memory accessed as part of that pattern
  - Usually much smaller than all of the memory allocated
  - Ideally a fixed region of memory of a certain size, though it is not so clean in the real world
- Most of the time, memory accesses go to the working set, so the OS tries to make sure those pages are always available

Virtual Memory Features
- Translation: the ability to translate accesses from one address space (virtual) to a different one (physical)
  - When translation is enabled, the processor uses virtual addresses while physical memory uses physical addresses
  - Side effects: can be used to avoid overlap between processes and to give programs a uniform view of memory
- Protection: prevent access to the private memory of other processes
  - Different pages of memory can be given special behavior (read only, invisible to user programs, etc.)
  - Kernel data is protected from user programs
  - Programs are protected from themselves
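
A minimal user-level sketch of per-page protection, using the POSIX mmap/mprotect calls (this example is an illustration added here, not part of the slides): the process asks the OS to mark a page read-only, and any later write to it faults.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);   /* page size, typically 4 KB */

    /* Ask the OS for one anonymous, readable/writable page. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "hello");                          /* fine: page is writable */

    /* Drop write permission: the kernel updates this page's protection bits. */
    if (mprotect(buf, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("%s\n", buf);                           /* reads are still allowed */
    /* buf[0] = 'H';   <- would now raise SIGSEGV: hardware enforces protection */

    munmap(buf, page);
    return 0;
}
```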

Hardware Support
- Two operating modes:
- Privileged (protected, kernel) mode: OS context
  - Entered as a result of an OS invocation (system call, interrupt, exception)
  - Allows execution of privileged instructions
  - Allows access to all of memory (sort of)
- User mode: process context
  - Can only access resources (memory) in its own context (address space)

A system with virtual addressing
- Modern processors use virtual addresses
- The CPU generates a virtual address; address translation is done by dedicated hardware (the memory management unit, MMU) via an OS-managed lookup table
[Figure: the CPU issues virtual address 4100; the MMU translates it to physical address 4, and the data word is fetched from main memory cells 0 .. M-1]

Virtual Addresses
- Virtual addresses are independent of the location in physical memory (RAM) where the referenced data lives; the OS determines that location
- Instructions issued by the CPU reference virtual addresses: pointers, arguments to load/store instructions, the PC, ...
- Virtual addresses are translated by hardware into physical addresses (with some help from the OS)
- The set of virtual addresses a process can reference is its address space
- There are many different possible mechanisms for translating virtual addresses to physical addresses
- In reality, an address space is a data structure in the kernel, typically called a memory map

Linux Memory Map
- Each process has its own address space, specified with a memory map: struct mm_struct
- Allocated regions of the address space are represented using a vm_area_struct (vma)
  - Recall: a process has a sparse address space
- A vma includes:
  - Start address (virtual)
  - End address (the first address after the vma); memory regions are page aligned
  - Protection (read, write, execute, etc.); different page protections mean a new vma
  - Pointer to a file (if the region is file-backed)
  - References to the memory backing the region
  - Other bookkeeping
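
A heavily simplified sketch of those two kernel structures. The field names follow the real Linux definitions in <linux/mm_types.h>, but almost all members are omitted, and the vm_next/mmap linked list shown here is from older kernels (newer kernels track vmas in a maple tree); treat this as an illustration, not the current kernel definition.

```c
/* Simplified; the real definitions live in <linux/mm_types.h>. */
struct vm_area_struct {
    unsigned long vm_start;          /* first virtual address of the region         */
    unsigned long vm_end;            /* first virtual address *after* the region    */
    unsigned long vm_flags;          /* VM_READ | VM_WRITE | VM_EXEC | ...          */
    struct file  *vm_file;           /* backing file, or NULL for anonymous memory  */
    struct mm_struct      *vm_mm;    /* the address space this vma belongs to       */
    struct vm_area_struct *vm_next;  /* next region (older kernels' linked list)    */
};

struct mm_struct {
    struct vm_area_struct *mmap;     /* head of the region list (older kernels)     */
    /* ... page table pointer, counters, locks, ...                                 */
};
```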

Memory map
- A linked list of memory regions assigned to valid virtual addresses

Memory mapping
- Each region has a specific use:
  - Loaded executable segments
  - Stack
  - Heap
  - Memory-mapped files
- Modifying the memory map yourself:
  - void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
  - int munmap(void *start, size_t length);
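
A minimal sketch of mapping a file into the address space with mmap (error handling abbreviated; the file name is just an example):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);      /* example file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file read-only; the kernel creates a new file-backed vma. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(data, 1, st.st_size, stdout);           /* pages fault in on demand */

    munmap(data, st.st_size);                      /* remove the region again  */
    close(fd);
    return 0;
}
```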

Instantiating the memory map
- The OS must translate the memory map into a hardware configuration, i.e., a policy that the CPU can enforce
- Two main hardware techniques for doing this:
  - Segmentation: each region is stored as a single entity
  - Paging: regions are broken into fixed-size memory units (pages)

Segmentation
- Segmentation partitions memory into logical units: stack, code, heap, ...
- On a segmented machine, a virtual address is <segment #, offset>
- Segments are units of memory from the user's perspective; segmentation means many segments per process
- A segment is a region of contiguous memory, defined by:
  - Base address: physical start address of the segment
  - Limit: size of the segment (the maximum legal value of the offset within it)
- Hardware support: multiple base/limit pairs, one per segment, stored in a segment table; segments are named by segment #, used as an index into the table (see the sketch below)
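
A minimal sketch of what the hardware does on each reference under segmentation. The bit split and the table are invented for illustration (here 4 bits of segment number and a 28-bit offset):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One entry of the per-process segment table: a base/limit pair. */
struct segment {
    uint32_t base;    /* physical start address of the segment */
    uint32_t limit;   /* size of the segment in bytes          */
};

/* Hypothetical split: top 4 bits select the segment, low 28 bits are the offset. */
#define SEG_SHIFT 28
#define OFF_MASK  ((1u << SEG_SHIFT) - 1)

uint32_t translate(const struct segment *table, uint32_t va) {
    uint32_t seg = va >> SEG_SHIFT;   /* segment # indexes the segment table */
    uint32_t off = va & OFF_MASK;     /* offset within the segment           */

    if (off >= table[seg].limit) {    /* hardware limit check                */
        fprintf(stderr, "segmentation fault: offset 0x%x beyond limit\n", off);
        exit(1);
    }
    return table[seg].base + off;     /* add the base to get the physical address */
}
```

For example, with table[1] = {base 0x20000, limit 0x1000}, virtual address 0x10000010 (segment 1, offset 0x10) translates to physical address 0x20010.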

Using Segments
- Divide the address space into logical segments; each logical segment can be in a separate part of physical memory
- Separate base and limit for each segment, plus protection bits (read and write bits per segment)
- How is the segment designated?
  - Using part of the logical address: the top bits select the segment, the low bits select the offset within the segment
  - Implicitly, by the type of memory reference (code vs. data segments)
  - Via special registers

Segment lookups
- Segment table: base and limit for every segment in the process

x86 Segments
- CS = code segment, DS = data segment, SS = stack segment, ES/FS/GS = auxiliary segments
- Specified explicitly or implicitly by instructions
- Accessed via special registers holding 16-bit "selectors" that identify the segment to the hardware MMU
- Functionality depends on the CPU operating mode

Protected Mode (32 bits)
- Segment information is stored in a table: the GDT (Global Descriptor Table)
  - Where is the GDT? It is an array of segment descriptors (base, limit, flags) set up by the OS; the GDTR register holds its address
- Segment registers now hold an index into that array: each register selects a segment descriptor
  - E.g., CS points to the code segment descriptor in the GDT
- The segment registers are 16 bits

Linear address calculation

Segment descriptors

Segmentation Registers

Pros and Cons of Segmentation
- Advantages:
  - Supports dynamic relocation of address spaces
  - Supports protection across multiple address spaces
  - Cheap: few registers and little logic
  - Fast: the add and compare are easy to do in hardware
- Disadvantages:
  - Each segment must be allocated contiguously in real memory
  - Fragmentation: the system may be unable to allocate a new process, and must allocate memory that may never be used
  - No fine-grained sharing: memory can only be shared at whole-segment granularity, not as smaller regions

Pages
- Break memory up into fixed-size chunks
  - Fast to allocate and free
  - Easier to translate, and fewer translations are needed
- Page size is mostly 4 KB, but not always:
  - 32-bit x86 supports 4 KB and 4 MB pages
  - 64-bit x86 supports 4 KB, 2 MB, and 1 GB pages
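
A quick sanity check on how the page size relates to the address split (assuming power-of-two page sizes):

  4 KB = 2^12 bytes, so 12 offset bits
  2 MB = 2^21 bytes, so 21 offset bits
  1 GB = 2^30 bytes, so 30 offset bits

The remaining upper address bits form the page number.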

Paging
- Translating virtual addresses:
  - A virtual address has two parts: virtual page number and offset
  - The virtual page number (VPN) is an index into a page table
  - The page table entry contains a page frame number (PFN)
  - The physical address is PFN::offset
- Page tables are managed by the OS
  - They map virtual page numbers (VPNs) to page frame numbers (PFNs); the VPN is simply an index into the page table
  - There is one page table entry (PTE) per page in the virtual address space, i.e., one PTE per VPN (see the sketch below)
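
A minimal sketch of the translation step for a single-level page table with 32-bit addresses and 4 KB pages (the PTE layout here is invented for illustration):

```c
#include <stdint.h>

#define PAGE_SHIFT  12                          /* 4 KB pages: 12 offset bits    */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
#define NUM_PTES    (1u << (32 - PAGE_SHIFT))   /* one PTE per VPN: 2^20 entries */

/* Hypothetical page table entry: a valid bit plus a page frame number. */
struct pte {
    uint32_t valid : 1;
    uint32_t pfn   : 20;
};

/* pa = PFN::offset, looked up via the VPN; returns 0 on success, -1 on a fault. */
int translate(const struct pte *page_table, uint32_t va, uint32_t *pa) {
    uint32_t vpn    = va >> PAGE_SHIFT;         /* virtual page number = upper bits */
    uint32_t offset = va & OFFSET_MASK;         /* offset = low 12 bits             */

    if (!page_table[vpn].valid)                 /* page not resident: fault to OS   */
        return -1;

    *pa = ((uint32_t)page_table[vpn].pfn << PAGE_SHIFT) | offset;
    return 0;
}
```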

Paging with Large Address Spaces
- Mapping of logical addresses onto physical memory: one page table per process, each entry holding a base (frame) address plus control bits
[Figure: a sparse page table, with some entries skipped, mapping pages onto scattered frames and free pages in physical memory]

Page tables
[Figure: a memory-resident page table; each entry has a valid bit and either a physical page number or a disk address, so a virtual page may map to physical memory or to disk storage (a swap file or a regular file system file)]

Page Translation: 4K Pages
- How are virtual addresses translated to physical addresses?
  - The upper bits of the address designate the page number; the low bits are the page offset
  - With a 32-bit virtual address and 4K pages: a 20-bit page number and a 12-bit offset
- No comparison or addition is needed: just a table lookup and bit substitution
- One page table per process, with one entry per page in the address space:
  - Base address of each page in physical memory
  - Read/write protection bits
- How many entries are in the page table? (see the worked example below)
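
A worked answer to that question, assuming a 32-bit virtual address space, 4 KB pages, and 4-byte page table entries:

  entries = 2^32 bytes / 2^12 bytes per page = 2^20 ≈ 1 million PTEs
  size    = 2^20 entries x 4 bytes per PTE   = 4 MB of page table per process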

Paging Advantages
- Easy to allocate physical memory
  - Just grab any available page in memory; free pages are kept together on a linked list (see the sketch below)
  - To free a page, just put it back on the list
  - External fragmentation is not a problem
- Large contiguous virtual address regions are easy to map, and you don't even need to map the entire region
- Easy to "page out" chunks of programs
  - All chunks are the same size (the page size)
  - Use the valid bit to detect references to "paged-out" pages
  - Page sizes are usually chosen to be convenient multiples of disk block sizes
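
A minimal sketch of that free list of physical frames (an illustration, not any particular kernel's allocator): because every frame is the same size, allocation and free are O(1) list operations.

```c
#include <stddef.h>

/* Each free frame stores the link to the next free frame inside itself. */
struct free_page {
    struct free_page *next;
};

static struct free_page *free_list;    /* head of the list of free frames */

/* Allocate: pop any frame off the list (no searching, no fragmentation concerns). */
void *alloc_page(void) {
    struct free_page *p = free_list;
    if (p != NULL)
        free_list = p->next;
    return p;                           /* NULL means out of physical memory */
}

/* Free: push the frame back onto the list. */
void free_page(void *frame) {
    struct free_page *p = frame;
    p->next   = free_list;
    free_list = p;
}
```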

Paging Disadvantages
- Can have internal fragmentation: a process may not allocate memory in multiples of the page size
- Memory reference overhead: 2 references per address lookup (page table, then memory)
  - Solution: use a hardware cache to absorb page table lookups, the translation lookaside buffer (TLB), covered next class
- Page tables are stored in memory and need one page table entry per page in the virtual address space
  - 32-bit x86: an 8 KB page table is required to map 4 MB of address space
  - 64-bit x86: a 16 KB page table is required to map 2 MB of address space
  - Page tables can use a lot of memory

Combining Segmentation and Paging
- The (32-bit) x86 architecture supports both segments and paging
- Use segments to manage logically related units: stack, file, module, heap, ...
  - Segments vary in size, but are usually large (multiple pages)
  - Policy can be managed at a single location
- Use pages to partition segments into fixed-size chunks
  - Separates translation from protection
  - No external fragmentation
  - Segments are "pageable": the entire segment need not be in memory at the same time
- Linux uses: 1 kernel code segment, 1 kernel data segment, 1 user code segment, 1 user data segment, 1 task state segment (stores registers on a context switch), and 1 "local descriptor table" segment (not really used); all of these segments are paged

Implementing Segmentation and Paging
- IBM System/370: a virtual address is split into a segment # (4 bits), a page # (8 bits), and a page offset (12 bits); the segment # selects a page table, the page # selects an entry in it, and that entry combined with the offset gives the physical address
[Figure: the x86 equivalent, showing a virtual address being walked through page tables to a physical address in physical memory]
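
A minimal sketch of that combined scheme for a System/370-style 24-bit address (4-bit segment #, 8-bit page #, 12-bit offset); the table layout is invented for illustration and bounds/valid checks are omitted:

```c
#include <stdint.h>

#define SEG_BITS   4
#define PAGE_BITS  8
#define OFF_BITS   12

/* Per-segment entry: points to that segment's own page table. */
struct seg_entry {
    uint32_t *page_table;              /* array of 2^8 frame numbers */
};

/* Translate a 24-bit virtual address: segment # -> page table,
 * page # -> frame number, then append the page offset. */
uint32_t translate(const struct seg_entry *seg_table, uint32_t va) {
    uint32_t off  = va & ((1u << OFF_BITS) - 1);
    uint32_t page = (va >> OFF_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t seg  = (va >> (OFF_BITS + PAGE_BITS)) & ((1u << SEG_BITS) - 1);

    uint32_t frame = seg_table[seg].page_table[page];
    return (frame << OFF_BITS) | off;  /* physical address = frame :: offset */
}
```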