Day 21 Virtual Memory.

Two-level scheme to support large tables

Consider a process as described here:
- Logical address space is 4 GiB (2^32 bytes); page size is 4 KiB (2^12 bytes).
- There are 2^20 pages in the process (2^32 / 2^12), so we need 2^20 page-table entries.
- If each page-table entry occupies 4 bytes, the page table is 2^22 bytes (4 MiB) and occupies 2^22 / 2^12 = 2^10 pages.
- The root table has 2^10 entries, one for each page that holds part of the page table. It occupies 2^10 × 4 = 2^12 bytes (4 KiB) and is kept in main memory permanently.
- A single memory reference could therefore require two disk accesses: one to bring in the needed page-table page, and one for the data page itself.
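The arithmetic above can be checked with a short sketch (the constants follow the example; the variable names are illustrative):

```python
# Verify the two-level page-table arithmetic for the example above:
# 32-bit logical addresses, 4 KiB pages, 4-byte page-table entries.
ADDRESS_SPACE = 2 ** 32
PAGE_SIZE = 2 ** 12
PTE_SIZE = 4

num_pages = ADDRESS_SPACE // PAGE_SIZE            # 2**20 pages in the process
page_table_bytes = num_pages * PTE_SIZE           # 2**22 bytes = 4 MiB
page_table_pages = page_table_bytes // PAGE_SIZE  # 2**10 pages hold the page table
root_table_bytes = page_table_pages * PTE_SIZE    # 2**12 bytes = 4 KiB

print(num_pages, page_table_bytes, page_table_pages, root_table_bytes)
# 1048576 4194304 1024 4096
```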

Root page table: always in main memory. User page tables: brought into main memory when needed.

Inverted page table
- A conventional page table can get very large. An inverted page table has one entry for every frame of main memory and hence is of a fixed size.
- A hash function maps the page number to an index into the table; the index of the matching entry is the frame number.
- An entry contains the page number, process id, valid bit, modify bit, chain pointer, and so on.

Rehashing techniques for the inverted page table (Fig. 8.27): hash function is X mod 8; panel (b) shows chained rehashing, where colliding entries are linked through their chain pointers.
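A toy model of the chained scheme, assuming an 8-entry table as in the X mod 8 example (the list-based layout and field order are illustrative, not from the text):

```python
TABLE_SIZE = 8
# One entry per physical frame: [page_number, process_id, chain] or None.
# The index of an entry is the frame number.
table = [None] * TABLE_SIZE

def lookup(page, pid):
    """Return the frame holding (pid, page), following the collision chain."""
    i = page % TABLE_SIZE
    while i is not None and table[i] is not None:
        if table[i][0] == page and table[i][1] == pid:
            return i
        i = table[i][2]
    return None  # not resident: page fault

def insert(page, pid):
    """Place (pid, page) in a frame, chaining on hash collisions."""
    h = page % TABLE_SIZE
    if table[h] is None:
        table[h] = [page, pid, None]
        return h
    free = table.index(None)      # any free frame
    table[free] = [page, pid, None]
    i = h                         # append to the end of the chain at slot h
    while table[i][2] is not None:
        i = table[i][2]
    table[i][2] = free
    return free

insert(3, 1)          # hashes to slot 3
insert(11, 1)         # also hashes to 3: chained into a free frame
print(lookup(11, 1))  # prints the frame where page 11 landed: 0
```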

Translation Lookaside Buffer (TLB)
- Used in conjunction with a page table. The aim is to reduce references to the page table and hence the number of memory accesses (without a TLB, every fetch costs two memory accesses: one for the page table, one for the data).
- The TLB is a cache that holds a small portion of the page table; it is faster and smaller than main memory and reduces the overall page access time.
- A TLB entry contains a page number and the corresponding page-table entry.

During address translation:
1. Check the TLB. On a TLB hit, use the frame number with the offset to generate the physical address, and stop. (The page-table access can be started in parallel and abandoned on a hit.)
2. On a TLB miss, look at the page-table entry. If the page is present, use the frame number with the offset to generate the address, and update the TLB.
3. If the page is not present (page fault), block the process and issue a request to bring the page into main memory. When the page is ready, update the page table.
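The steps above can be sketched as follows (dict-based TLB and page table are stand-ins for the real hardware structures; the page-fault path is reduced to an exception):

```python
PAGE_SIZE = 4096

def translate(vaddr, tlb, page_table):
    """Translate a virtual address, checking the TLB before the page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:                         # TLB hit: frame number at hand
        return tlb[page] * PAGE_SIZE + offset
    frame = page_table.get(page)            # TLB miss: consult the page table
    if frame is None:                       # page fault: OS must load the page
        raise LookupError("page fault on page %d" % page)
    tlb[page] = frame                       # update the TLB for next time
    return frame * PAGE_SIZE + offset

tlb, page_table = {}, {0: 5}
print(translate(100, tlb, page_table))  # frame 5 -> 5*4096 + 100 = 20580
```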

TLB
- If we keep the right page-table entries in the TLB, we can reduce page-table accesses and hence memory accesses.
- The TLB holds only some of the page-table entries, so it uses associative mapping to find an entry: all TLB entries are searched in parallel, giving O(1) search time.

Memory access time
- TLB hit: 10 ns (TLB) + 100 ns (data) = 110 ns.
- TLB miss: 10 ns (TLB) + 100 ns (root page table) + 100 ns (page table) + 100 ns (data) = 310 ns.
- If 99% of accesses are TLB hits, average access time = 0.99 × 110 ns + 0.01 × 310 ns = 112 ns.
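The same arithmetic, spelled out:

```python
TLB_NS, MEM_NS, HIT_RATE = 10, 100, 0.99

hit_time = TLB_NS + MEM_NS       # 110 ns: TLB lookup + data access
miss_time = TLB_NS + 3 * MEM_NS  # 310 ns: TLB + root PT + PT + data
average = HIT_RATE * hit_time + (1 - HIT_RATE) * miss_time
print(round(average))  # 112
```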

Direct mapping vs. associative mapping (figure)

Page size – a hardware/software decision

Small page size:
- Less internal fragmentation
- More pages in main memory
- Large page tables
- Few page faults

Large page size:
- More internal fragmentation
- Fewer pages per process
- Smaller page tables
- Fewer page faults
- Fewer processes in main memory
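A rough sketch of the tradeoff in numbers, assuming a hypothetical 16 MiB process and treating internal fragmentation as half of the last page wasted on average:

```python
PROCESS_SIZE = 16 * 2 ** 20  # hypothetical 16 MiB process

for page_size in (512, 4096, 65536):
    entries = PROCESS_SIZE // page_size  # page-table entries needed
    waste = page_size // 2               # expected internal fragmentation
    print(page_size, entries, waste)
# Smaller pages: more entries, less waste; larger pages: the reverse.
```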

Page faults and page size

e.g.: Small pages

    while (x < 30) {        // Page 1
        printTheValues();   // Page 5
        readNewValues();    // Page 6
        filterNewValues();  // Page 11
        writeNewValues();   // Page 12
        printTheValues();   // Page 5
        x++;                // Page 1
    }

Since the pages are small, pages 1, 5, 6, 11, and 12 can all reside in main memory. Hence, fewer page faults.

e.g.: Medium-sized pages

    while (x < 30) {        // Page 1
        printTheValues();   // Page 5
        readNewValues();    // Page 3
        filterNewValues();  // Page 4
        writeNewValues();   // Page 5
        printTheValues();   // Page 5
        x++;                // Page 1
    }

Only pages 1, 3, and 4 fit in main memory, so page 5 must be brought in by replacing one of 1/3/4, which is needed again almost immediately. Lots of page faults.

e.g.: Large pages

    while (x < 30) {        // Page 1
        printTheValues();   // Page 1
        readNewValues();    // Page 1
        filterNewValues();  // Page 2
        writeNewValues();   // Page 2
        x++;                // Page 1
    }

Both pages 1 and 2 fit in main memory. Fewer page faults.
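The effect can be reproduced with a small simulation. This sketch assumes 3 available frames and FIFO replacement (the text does not specify either); the reference strings follow the per-line page annotations above, repeated for three loop iterations:

```python
from collections import OrderedDict

def count_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    resident = OrderedDict()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict the oldest page
            resident[page] = True
    return faults

medium = [1, 5, 3, 4, 5, 5, 1] * 3  # one page per line of the medium-page loop
large = [1, 1, 1, 2, 2, 1] * 3      # the same loop with large pages
print(count_faults(medium, 3), count_faults(large, 3))  # 13 2
```

With medium pages the working set (1, 3, 4, 5) exceeds the 3 frames, so nearly every iteration faults; with large pages both pages stay resident after the first iteration.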

Page faults and number of frames per process

Many architectures support multiple page sizes, but operating systems typically support only a single page size:
- Makes the replacement policy simpler
- Makes resident-set management easier (how many frames per process, etc.)

VM with segmentation – advantages
- Growing data structures: the OS can shrink or enlarge a segment as required.
- Parts of the process can be recompiled independently without recompiling the entire process.
- Easier to share.
- Easier to protect.

Segment table entry
- Present bit
- Modify bit
- Starting address
- Length of segment
- Protection bits

Combined paging and segmentation
- Sharing and protection are handled at the segment level.
- Replacement is handled at the page level.
- Present bit and modified bit are kept in the page-table entry.
- Linux: three-level paging for user space, buddy allocator for kernel space.
- UNIX: paging for user space, dynamic allocation for kernel space.
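A minimal sketch of the combined translation path, assuming a dict-based segment table (illustrative, not any particular OS): the segment table entry carries the length check, and points to a per-segment page table that maps pages to frames.

```python
PAGE_SIZE = 4096

def translate(seg, page, offset, seg_table):
    """Segment-level check (length/protection), then page-level frame lookup."""
    entry = seg_table[seg]                  # protection/sharing live here
    if (page + 1) * PAGE_SIZE > entry["length"]:
        raise ValueError("address beyond segment length")
    frame = entry["page_table"][page]       # replacement operates at this level
    return frame * PAGE_SIZE + offset

seg_table = {0: {"length": 2 * PAGE_SIZE, "page_table": {0: 7, 1: 9}}}
print(translate(0, 1, 24, seg_table))  # frame 9 -> 9*4096 + 24 = 36888
```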