Virtual Memory Brian Bershad

Virtual Memory VM allows a program to run with less than its full virtual address space (VAS) in primary memory, so a program can run on a machine with less memory than it "needs". It can also run on a machine with "too much" physical memory. Many programs don't need all of their code and data at once (or ever), so this saves space and may permit more programs to share primary memory. The amount of real memory that a program needs to execute can be adjusted dynamically to suit the program's behavior. VM requires hardware support, as we saw previously, plus OS management algorithms.

VM VM should be completely invisible to the programmer. It is the responsibility of the OS to load those parts of the program that are needed, and to remove from memory parts that are not; how it does this is a policy decision. "Virtual memory" is not free -- it must exist somewhere. Program pages that are not stored in primary memory must be stored elsewhere (typically on disk).

The Memory Hierarchy

Issues
History: what did "virtualized store" used to look like?
Mechanism: understanding how to make something "not there" appear to be there.
Fetch strategies: WHEN to bring something into primary memory (demand vs. anticipatory).
Replacement strategies: WHICH page to throw out when you fetch something.

History
Uniprogramming with overlays: manual, no protection.
Multiprogramming: more than 1 job in memory at a time.
  fixed partition: resulted in internal fragmentation
  variable partition: external fragmentation
  protection: base (and) bounds registers

Early Memory Management Used Fixed Partitions Easy but limited. Add a process's virtual address to its base register (e.g., P1.Base for partition P1). The size of each partition is fixed, so there is internal fragmentation within each allocated partition.

Variable Partition Was a Step Forward Each VA is added to P.Base and checked against P.Bounds; if less than the bound, access to the derived physical address is allowed. Could still have external fragmentation: EMPTY holes left between allocated partitions (e.g., between P2 and P3).

Vanilla Multiprogramming Programs stay in main memory until done. Memory utilization was a problem: external fragmentation meant we had enough memory, but not in the right place, and processes could "hang out" for a long time and gobble up memory. Swapping was the answer: copy a process's memory to disk, free up the memory for another process, and don't run the swapped process until there is more memory (or someone hits a key...). Drastic, but simple.

Virtual Memory Separate the generated address from the referenced address: P = F(V), where P is a physical address, V is a virtual address, and F is an arbitrary function. Motivation:
have > 1 process in memory at a time: F is process disjoint
allow sizeof(V) to be >> sizeof(P): F is many to one
allow sizeof(P) to be >> sizeof(V): F is one to many
sharing: F is many to many
protection

Dynamic Relocation Registers Associate with each process a base and bounds register. Add the base to the virtual address; if the result is > bounds, fault. Reload the relocation registers on each context switch. Example: with base = 10000 and bounds = 11000, LD R3, @120 (load R3 with the contents of memory location 120) references VA 120, which relocates to physical address 10120; an address past the bounds would FAULT.

Segmentation Have > 1 base,bounds register pair per process; call each a "segment". Split the process into multiple segments, where a segment is a collection of logically related data: code, a module, the stack, a file. Put the segment registers into a table associated with each process. Include in the virtual address the segment number through which you are referencing memory. Bonus: add protection bits per segment into the table (No Access, Read, Write, Execute).

Segment Table [Figure: the virtual address is split into a segment # (bits 15-12) and an offset (bits 11-0); the segment # indexes a segment table of (BASE, BOUNDS, ACCESS) entries; after the bounds and access checks pass ("ok?"), BASE + offset gives the physical memory address.] How big can a segment be? With a 12-bit offset, at most 4 KB.

The Segment Table Can have one segment table per process. To share memory, just put the same translation into two processes' base-and-bounds pairs; you can even share with different protections. Cross-segment names can be tough to deal with: segments need to have the same names in multiple processes if you want to share pointers. If the segment table is big, it should be kept in main memory, but then access is slow. So, keep a subset of the referenceable segments in a small on-chip memory and look up translations there; this can be either automatic or manual.

Paging Goals: make allocation and swapping easier. Make all chunks of memory the same size; call each chunk a "PAGE". Example page sizes are 512 bytes, 1K, 4K, 8K, etc.; pages have been getting bigger with time (we'll discuss reasons why pages should be of a certain size as the week progresses). [Figure: the virtual address is split into a page # and an offset; the page # plus the Page Table Base Register indexes the page table, and the frame found there plus the offset gives the physical address. Each entry in the page table is a "Page Table Entry" (PTE).]

An Example Pages are 1024 bytes long: this says the bottom 10 bits of the VA are the offset. The PTBR contains 2000: this says the first page table entry for this process is at physical memory location 2000. The virtual address is 2256: this says "page 2, offset 208". Physical memory location 2008 contains 8192: this says that each PTE is 4 bytes (1 word), so page 2's PTE is the third entry (at 2000 + 2*4 = 2008), and that page 2 of this process's address space can be found at memory location 8192. So, we add 208 to 8192 and we get the real data at 8400!

What does a PTE contain?
M-bit (1) | R-bit (1) | V-bit (1) | Protection bits (1-2) | Page Frame Number (about 20)
The Modify bit says whether or not the page has been written; it is updated each time a WRITE to the page occurs. The Reference bit says whether or not the page has been touched; it is updated each time a READ or a WRITE occurs. The V (valid) bit says whether or not the PTE can be used; it is checked each time the virtual address is used. The Protection bits say what operations are allowed on the page: READ, WRITE, EXECUTE. The Page Frame Number says where in memory the page is.

Evaluating Paging Easy to allocate memory: memory comes from a free list of fixed-size chunks, and to find a new page you get anything off the free list; external fragmentation is not a problem. Easy to swap out pieces of a program, since all pieces are the same size: use the valid bit to detect references to swapped pages, and pages are a nice multiple of the disk block size. Can still have internal fragmentation. Table space can become a serious problem, especially with small pages: e.g., with a 32-bit address space and 1K pages, that's 2^22 pages, and that many PTEs, which is a lot! Memory reference overhead can be high: 2 references for every one.

Segmentation and Paging at the Same Time Provide for two levels of mapping. Use segments to contain logically related things (code, data, stack); these can vary in size but are generally large. Use pages to describe the components of the segments; this makes segments easy to manage, lets memory be swapped between segments, and means page table entries need to be allocated only for those pieces of the segments that have themselves been allocated. Segments that are shared can be represented with shared page tables for the segments themselves.

An Early Example -- IBM System 370 A 24-bit virtual address, split into a 4-bit segment number, an 8-bit page number, and a 12-bit offset. The segment number indexes a segment table, whose entry points to a page table; the page table entry gives the frame in real memory, and combining it with the offset is a simple bit operation.

Lookups Each memory reference can take 3 references (segment table, page table, then the data itself), assuming no fault. Can exploit locality to improve the lookup strategy: a process is likely to use only a few pages at a time. Use a Translation Lookaside Buffer (TLB) to exploit that locality: a TLB is a fast associative memory that keeps track of recent translations, and the hardware searches the TLB on every memory reference. On a TLB miss, either a hardware or software exception can occur: older machines reloaded the TLB in hardware; newer RISC machines tend to use software-loaded TLBs, where you can have any structure you want for the page table and a fast handler computes the translation and goes (e.g., the MIPS).

A TLB A small fully associative cache. Each entry contains a tag and a value: tags are virtual page numbers, values are physical page table entries. For example:
Tag        Value
0xfff1000  0x12341111
0xa10100   ?
0xbbbb00   0x1111aa11
A lookup of 0xfff1000 hits and returns 0x12341111. Problems include keeping the TLB consistent with the PTEs in main memory (valid and ref bits, for example), keeping TLBs consistent on an MP, and quickly loading the TLB on a miss. Hit rates are important.

MIPS R3000 VM Architecture 2 GB of user space (KUSEG): when in user mode, can only access KUSEG. Three kernel regions: KSEG0 contains kernel code and data, but is unmapped; translations are direct. KSEG1 is like KSEG0, but uncached; used for I/O space. KSEG2 is kernel space, but cached and mapped; it contains the page tables for KUSEG. The implication is that the page tables are kept in VIRTUAL memory!
00000000-7fffffff  KUSEG  user, mapped, cacheable (2 GB)
80000000-9fffffff  KSEG0  kernel, unmapped, cached (512 MB)
a0000000-bfffffff  KSEG1  kernel, unmapped, uncached (512 MB)
c0000000-ffffffff  KSEG2  kernel, mapped, cached (1 GB)

Software Loaded TLB The MIPS TLB contains 64 entries, fully associative, with random replacement. On a TLB miss to KUSEG, a trap occurs and control transfers to the UTLBMISS handler. If a miss occurs in the miss handler itself (the missed VA is that of the PTE, which lives in mapped KSEG2), a second fault occurs: control transfers to a KTLBMISS handler, and the translation for the page-table page is loaded into the TLB.
UTLBmiss(vm_offset_t va)
{
    PTE_t *pte;
    pte = PCB.PTBR;                 /* base of this process's page table */
    pte = pte + (va / PAGE_SIZE);   /* index by virtual page number */
    TLB_dropin(va, *pte);           /* could miss here: pte is in KSEG2 */
}

Selecting a Page Size Small pages give you lots of flexibility, but at a high cost. Big pages are easy to manage, but not very flexible. Issues include:
TLB coverage: the product of the page size and the # of entries.
internal fragmentation: you are likely to use less of a big page.
# of page faults and the prefetch effect: small pages will force you to fault often.
match to I/O bandwidth: want one miss to bring in a lot of data, since it will take a long time.