Virtual Memory: the Page Table and Page Swapping


Virtual Memory: the Page Table and Page Swapping
CS/COE 1541 (term 2174)
Jarrett Billingsley

Class Announcements
How was “break”?
HW4 will be out Wednesday. Thought I’d push it back because of the project pushback.
You still have until tomorrow night to turn in Project 1. …for a 20% late penalty.
Going to re-cover some PT and TLB stuff from the last lecture. That was a pretty anemic presentation.
3/13/2017 CS/COE 1541 term 2174

Just to recap…
Abbreviations: VMem = virtual memory; PMem = physical memory; VM = virtual machine; VA = virtual address; PA = physical address; PT = page table.

Cache term → VMem term:
Block → Page
Miss → Page fault

VMem goals: Protection, Collaborative execution, Relocation. Also, to present a larger memory space than physically available – but not as much of a focus these days.

(Diagram: two processes’ address spaces, each with a code region from 0x8000 to 0xFFFF, sharing one physical memory.)

The Page Table

The yellow pages of memory
The page table (PT) is a “directory” which maps from VAs to PAs. It’s indexed by the VA page number. Each page table entry (PTE) has some familiar-looking info…
The valid bit says whether or not this is a valid virtual page.
The dirty bit says whether or not the page has been written to.
The reference bit says whether the data in this page has been read or written recently (more on this later).
Finally, there’s the PA page mapping.

(Table: example PTEs indexed by VA page numbers 0–5, each with V/D/R bits and a PA page mapping such as 0x852F, 0x24E, or 0x16244; unmapped pages show “-”.)

Hey, don’t touch that!
Okay, there’s more to it than that… Each PTE also has protection. This says if a page can be:
Read
Written
eXecuted
This is important not only for program stability, but to avoid attacks by malware! If you make the stack non-executable, that makes it much harder for malware to exploit your programs.

(Table: the PTE table from before, now with a Prot column – e.g. RX for code pages, R for read-only data, RW for writable data.)

Whose pages?
Given that each process has its own VMem space, what would happen if we used one PT? Each process needs its own PT. We solve this by having a PTR (page table register) to point to the current PT.

(Diagram: Processes 1 and 2 each have their own PT; both map VA page 0x8000 as valid RX code, but to different PA pages – 0x590B for Process 1, 0x6AD0 for Process 2.)

How big is the PT?
32-bit addresses with 4KiB (2^12 B) pages means 2^20 (1M) PTEs. 64-bit addresses with 4KiB pages means 2^52 (4 quadrillion) PTEs. We can use hierarchical page tables as a sparse data structure.
The address is split into 10 bits (“directory”), 10 bits (“table”), and 12 bits (“offset”).

(Diagram: the PTR points to the directory; the directory index selects an entry holding a second-level table’s address, the table index selects the PTE there – giving the page address and protection – and the offset completes the PA.)

How it works
Let’s do lw $t0, 16($s0). Since the cache uses PAs…

(Diagram: the CPU’s VA is translated via the PTR, a directory entry (table address 0004C000), and a PTE (page address 03BFA000, RX) into a PA, which then goes to the cache – miss… then hit.)

How many damn memory accesses do we need???

The TLB

It just never ends
Why not have a cache dedicated to PTEs? We call this the translation lookaside buffer (TLB). I don’t know why. Each block contains 1 or more PTEs.
On a hit… Hey, instant VA -> PA translation!
On a miss… Uh oh. Is it just that the PTE we need isn’t cached (a TLB miss)? Or is it that it’s a page fault – an invalid address? Which one do you think is more likely?

(Table: an example TLB entry mapping VA page 00008 to PA page 03BFA, with V/D/R bits and RX protection.)

TLB Performance
We can’t even access L1 cache without doing translation. So how fast does the TLB have to be? Fast. TLBs have to hit in a single cycle or less. Therefore:
The TLB is small (hundreds of blocks).
Set associativity to reduce miss rate without making it too slow.
Random replacement for speed (no time for LRU!).
Write-back for less bus traffic. Only the valid/dirty/ref bits have to be written anyway!

Switching processes
When you switch from Process 1 to Process 2… are the TLB and cache entries valid anymore? NOPE. We could flush (invalidate everything) the TLB and cache, but process switches happen a lot. Instead what we do is add a process identifier to each cache/TLB entry, and another register to hold the current process. That way we can keep the cache/TLB full, and only the currently-running process’s entries will be treated as valid. Isn’t this fun? ;D

Page Swapping

Capping again…
Page Swapping, or paging, means putting memory pages into nonvolatile storage. That is, we’re using RAM as a disk cache. It lets you:
Pretend you have more physical memory.
Run more programs at once.
Hibernate the system (make a “save state”).
Randomly access very large files.
But mechanical hard drives have high latency, though pretty good bandwidth. This leads to large (>4KB) pages, fully-associative mapping, LRU replacement with write-back, and write buffers.

Page Faults aren’t always an error
In VMem, a page fault means that the program is performing an invalid access. Either it’s using an invalid memory address… Or it’s trying to do something illegal when using a valid address. e.g. trying to execute a non-executable page. But page faults are handled by the OS, in software. So we can do whatever we want on page faults!

A cache, stomping on your face, forever.
We can treat invalid PTEs specially: we can say that they point to a block on the disk instead of memory.

(Table: VA pages 0x7FFE and 0x7FFF are invalid RW data pages whose PTEs hold disk block numbers <3354> and <9218>; VA page 0x8000 is valid and maps to PA page 0x590B in memory, RX.)

Handling page faults
When a page fault occurs, the OS can inspect the faulting process’s PT to see if it needs to “page in” a block from the disk. And just like with the cache… We may have to evict a physical memory page first. And if the page is dirty, we have to write it to disk. The OS reserves a part of the disk as swap space – a special area used only for storing blocks from memory. So page faults are slow. Usually the OS will put your process on hold while handling the fault, and come back to it when the transfer is complete.

Hibernation and Large File Access
This leads very naturally to two useful concepts: Hibernation is performed by simply paging every page to the swap space, and then shutting down. Then when the OS boots again, it pages them all back in, and resumes exactly where it left off. Large file access can be performed by “mapping” a file into a process’s virtual address space. Then the process accesses the file by accessing memory. When a page fault occurs, the OS handles bringing in the page! The OS can also evict pages of the file that you haven’t used. The OS can also evict pages of the file that you haven’t used.