
Misc
• Exercise 2 updated with Part III. Due next Tuesday at 12:30pm.
Project 2 (Suggestion)
• Write a small test for each call.
• Start from file system calls with simple Linux translation, or start from Exec().
Midterm (May 7)
• Closed book. Bring 3 pages of double-sided notes.
• Next Monday: Project 2 or review?

More on Address Translation
CS170 Fall
T. Yang
Based on slides from John Kubiatowicz

Implementation Options for Page Table
Page sharing among processes
How can page table entries be utilized?
Page table implementation
• One-level page table
• Multi-level paging
• Inverted page tables

Shared Pages through Paging
Shared code
• One copy of read-only code shared among processes (e.g., text editors, compilers, window systems).
• Shared code must appear in the same location in the logical address space of all processes.
Private code and data
• Each process keeps a separate copy of the code and data.
• The pages for the private code and data can appear anywhere in the logical address space.
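The following is a minimal user-level sketch (not from the lecture) of how two processes can end up sharing physical pages: both map the same file read-only with mmap(). The file path and mapping length are arbitrary placeholders.

```c
/* Minimal sketch: two processes that mmap() the same file read-only can be
 * backed by the same physical frames.  The path and length are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/bin/ls", O_RDONLY);            /* any read-only binary */
    if (fd < 0) { perror("open"); return 1; }

    /* Map one page of the file, read-only and shared. */
    void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Another process performing the same mmap() can be given the same
     * physical frames.  For absolute-addressed (non-PIC) shared code, the
     * mapping must also land at the same virtual address in every process,
     * as the slide notes. */
    printf("mapped at %p\n", p);
    munmap(p, 4096);
    close(fd);
    return 0;
}
```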

Shared Pages Example

Example
[Figure: page tables for Process A (PageTablePtrA) and Process B (PageTablePtrB); one shared physical page appears in the address space of both processes, while the remaining pages are private.]

Optimization of the Unix System Call fork()
A child process copies the address space of its parent.
• Most of the time this copy is wasted, because the child immediately performs exec().
• Can we avoid the copying on fork()?
[Figure: parent (Process A) and child (Process B) address spaces with their page tables after fork().]
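As a reminder of the pattern the slide refers to (not part of the lecture), here is a minimal fork()/exec() sketch: the child's copy of the parent's address space is discarded almost immediately by exec(), which is the cost copy-on-write avoids paying. The program being exec'd is an arbitrary placeholder.

```c
/* Minimal sketch of the fork()-then-exec() pattern. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* child starts with a (logical) copy
                                           of the parent's address space      */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: the copied address space is thrown away here by exec(). */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");                /* only reached if exec fails */
        _exit(127);
    }

    /* Parent: wait for the child to finish. */
    waitpid(pid, NULL, 0);
    return 0;
}
```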

Unix fork() optimization
[Figure: after fork(), the parent and child page tables point to the same physical pages rather than copying them.]

Unix fork() optimization
[Figure: a second view of the parent and child page tables sharing the same physical pages after fork().]

Copy-on-Write: Lazy Copying During Process Creation
COW allows the parent and child processes to initially share the same pages in memory.
A shared page is duplicated only when it is modified.
COW makes process creation more efficient, since only the modified pages are copied.

Copy on Write: After Process 1 Modifies Page C
How do we remember that a page is shared?
When do we detect the need for duplication?
We need a page table entry bit.
[Figure: a page table entry holding the physical page number plus control bits.]
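A minimal sketch of the idea (not the lecture's code): shared pages are mapped read-only with an OS-defined COW bit set in the PTE; a write then faults, and the handler copies the page and remaps it writable. All names here (pte_t, the bit masks, alloc_frame, copy_frame) are hypothetical.

```c
/* Hedged sketch of copy-on-write fault handling; types and helpers
 * (pte_t, PTE_*, alloc_frame, copy_frame) are hypothetical. */
#include <stdint.h>

typedef uint32_t pte_t;

#define PTE_VALID  0x1u
#define PTE_WRITE  0x2u
#define PTE_COW    0x4u          /* OS-defined "copy-on-write" bit      */
#define PTE_FRAME  0xFFFFF000u   /* physical frame number in high bits  */

extern uint32_t alloc_frame(void);                   /* get a free frame   */
extern void copy_frame(uint32_t dst, uint32_t src);  /* copy page contents */

/* Called on a write fault against a valid page.  Returns 1 if handled. */
int handle_write_fault(pte_t *pte) {
    if (!(*pte & PTE_VALID) || !(*pte & PTE_COW))
        return 0;                         /* not a COW page: real fault   */

    uint32_t old_frame = *pte & PTE_FRAME;
    uint32_t new_frame = alloc_frame();   /* private copy for this process */
    copy_frame(new_frame, old_frame);

    /* Point the entry at the private copy, make it writable, clear COW.
     * A real kernel would also track sharing counts and skip the copy
     * when the last sharer writes. */
    *pte = (new_frame & PTE_FRAME) | PTE_VALID | PTE_WRITE;
    return 1;                             /* retry the faulting access */
}
```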

More Examples of Utilizing Page Table Entries
How do we use the PTE?
• An invalid PTE can imply different things:
  – the region of the address space is actually invalid, or
  – the page/directory is simply somewhere other than memory.
• Validity is checked first.
  – The OS can use the other bits for location information.
Usage example: copy-on-write
• Indicates a page is shared with the parent.
Usage example: demand paging
• Keep only the active pages in memory.
• Place the others on disk and mark their PTEs invalid.
[Figure: a page table entry holding the physical page number plus control bits.]

Example: Intel x86 Architecture PTE
• Address format: (10, 10, 12-bit offset)
• Intermediate page tables are called "Directories"
P: Present (same as the "valid" bit in other architectures)
W: Writeable
U: User accessible
PWT: Page write transparent: external cache is write-through
PCD: Page cache disabled (page cannot be cached)
A: Accessed: page has been accessed recently
D: Dirty (PTE only): page has been modified recently
L: L=1 means a 4MB page (directory entry only); the bottom 22 bits of the virtual address serve as the offset
[Figure: PTE layout, high to low: Page Frame Number (Physical Page Number), bits free for OS use, 0, L, D, A, PCD, PWT, U, W, P.]
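A small sketch of decoding such an entry. The bit positions follow the standard 32-bit x86 PTE layout described on the slide; the macro names are my own, not from the lecture.

```c
/* Sketch: decoding a 32-bit x86 page-table entry as laid out on the slide. */
#include <stdint.h>
#include <stdio.h>

#define X86_PTE_P    (1u << 0)   /* Present                         */
#define X86_PTE_W    (1u << 1)   /* Writeable                       */
#define X86_PTE_U    (1u << 2)   /* User accessible                 */
#define X86_PTE_PWT  (1u << 3)   /* Page write transparent          */
#define X86_PTE_PCD  (1u << 4)   /* Page cache disabled             */
#define X86_PTE_A    (1u << 5)   /* Accessed                        */
#define X86_PTE_D    (1u << 6)   /* Dirty (PTE only)                */
#define X86_PTE_L    (1u << 7)   /* 4MB page (directory entry only) */
#define X86_PTE_PFN(e)  ((e) >> 12)   /* physical page frame number */

void dump_pte(uint32_t pte) {
    printf("frame %u%s%s%s%s%s\n",
           (unsigned)X86_PTE_PFN(pte),
           (pte & X86_PTE_P) ? " present"  : " not-present",
           (pte & X86_PTE_W) ? " writable" : "",
           (pte & X86_PTE_U) ? " user"     : "",
           (pte & X86_PTE_A) ? " accessed" : "",
           (pte & X86_PTE_D) ? " dirty"    : "");
}
```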

More Examples of Utilizing Page Table Entries
Usage example: zero-fill on demand
• Security and performance advantages
  – New pages carry no information.
• Give new pages to a process initially with their PTEs marked invalid.
  – On first access, a page fault occurs; physical frames are then allocated and filled with zeros.
• Often, the OS creates zeroed pages in the background.
Can a process modify its own translation tables?
• NO!
• If it could, it could get access to all of physical memory.
• Access has to be restricted.

Implementation Options for Page Table
Page sharing among processes
How can page table entries be utilized?
Page table implementation
• One-level page table
• Multi-level paging
• Inverted page tables

One-Level Page Table
What is the maximum size of the logical space?
Mapping of pages
Constraint: each page table needs to fit into a physical memory page! Why?
• A page table needs consecutive space, while the memory allocated to a process is a sparse set of nonconsecutive pages.
Maximum logical space size = # of entries * page size
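A minimal sketch of one-level translation (4 KB pages assumed; the type and field names are illustrative, not from the lecture):

```c
/* Sketch of one-level address translation with 4 KB pages. */
#include <stdint.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)         /* 4 KB pages */
#define PTE_VALID  0x1u
#define PTE_FRAME  0xFFFFF000u

typedef uint32_t pte_t;

/* Translate a virtual address using a single page table.
 * Returns 0 and sets *phys on success; returns -1 on a fault. */
int translate(const pte_t *page_table, uint32_t n_entries,
              uint32_t vaddr, uint32_t *phys) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */

    if (vpn >= n_entries)   return -1;          /* beyond the table    */
    pte_t pte = page_table[vpn];
    if (!(pte & PTE_VALID)) return -1;          /* page fault          */

    *phys = (pte & PTE_FRAME) | offset;         /* frame + offset      */
    return 0;
}
```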

A One-Level Page Table Cannot Handle a Large Space
Example:
• 32-bit address space with 4KB per page.
• The page table would contain 2^32 / 2^12 = 2^20 ≈ 1 million entries.
  – At 4 bytes per entry, we need a 4MB page table in contiguous space.
  – Is there 4MB of contiguous space for each process?
Maximum size with 4KB per page (if the table must fit in one page):
• # of entries = 4KB / 4B = 1K.
• Maximum logical space = 1K * 4KB = 4MB.

One-Level Page Table: Advantages/Disadvantages
Pros
• Simple memory allocation
• Easy to share
Con: what if the address space is sparse?
• E.g., on UNIX, code starts at 0 while the stack starts at (2^31 - 1).
• Cannot handle a large virtual address space.
Con: what if the table is really big?
• Not all pages are used all the time; it would be nice to keep only the working set of the page table in memory.
How about combining paging and segmentation?
• Segments with pages inside them?
• We need some sort of multi-level translation.

Two-Level Page-Table Scheme
[Figure: an outer level-1 page table whose entries point to level-2 page tables.]

Address-Translation Scheme
p1 is an index into the outer level-1 page table; p2 is the displacement within the level-2 inner page table.
The level-2 page table gives the final physical page number.
[Figure: the two-level translation path from (p1, p2, offset) through the level-1 and level-2 tables.]

Flow of the Two-Level Page Table
[Figure: a virtual address split into a 10-bit virtual P1 index, a 10-bit virtual P2 index, and a 12-bit offset; PageTablePtr points to a tree of page tables with 4-byte entries, and the physical address is formed from the physical page # plus the offset within a 4KB page.]
Tables are fixed size (1024 entries).
• On a context switch: save only the single PageTablePtr register.
Valid bits on page table entries:
• We don't need every 2nd-level table.
• Even when they exist, 2nd-level tables can reside on disk if not in use.
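A minimal sketch of the two-level walk described above (10/10/12 split; the type names and the phys_to_virt() helper are assumptions, not from the lecture):

```c
/* Sketch of a two-level page-table walk with a 10/10/12 address split. */
#include <stdint.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PTE_VALID  0x1u
#define PTE_FRAME  0xFFFFF000u

typedef uint32_t pte_t;

/* Hypothetical helper: map a physical address to a kernel-accessible pointer. */
extern pte_t *phys_to_virt(uint32_t paddr);

/* Returns 0 and sets *phys on success, -1 on a fault. */
int translate2(const pte_t *level1 /* 1024 entries */, uint32_t vaddr,
               uint32_t *phys) {
    uint32_t p1     = (vaddr >> 22) & 0x3FFu;        /* level-1 index     */
    uint32_t p2     = (vaddr >> 12) & 0x3FFu;        /* level-2 index     */
    uint32_t offset =  vaddr        & (PAGE_SIZE - 1);

    pte_t l1e = level1[p1];                          /* 1st memory access */
    if (!(l1e & PTE_VALID)) return -1;               /* no level-2 table  */

    const pte_t *level2 = phys_to_virt(l1e & PTE_FRAME);
    pte_t l2e = level2[p2];                          /* 2nd memory access */
    if (!(l2e & PTE_VALID)) return -1;               /* page fault        */

    *phys = (l2e & PTE_FRAME) | offset;
    return 0;
}
```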

Summary: Two-Level Paging
[Figure: a virtual memory view (code, data, heap, stack) mapped through a level-1 page table and several level-2 page tables to a physical memory view; unused level-1 entries are null.]

Summary: Two-Level Paging
[Figure: the same virtual-to-physical mapping, annotated with example addresses 0x90 and 0x80.]
In the best case, the total size of the page tables is roughly proportional to the number of virtual-memory pages the program actually uses.
But each translation requires two additional memory accesses!

Constraints of Two-Level Paging
Bits of d = log2(page size)
Bits of p1 >= log2(# of entries in the level-1 table)
Bits of p2 >= log2(# of entries in the level-2 table)
The physical page number is limited by the entry size of the level-2 table.
Logical space size = (# of entries in the level-1 table) * (# of entries in the level-2 table) * (page size)
[Figure: level-1 table entries pointing to level-2 tables, whose entries point to physical pages.]
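In LaTeX form (writing $N_1$ and $N_2$ for the entry counts of the level-1 and level-2 tables, my own notation), the constraints above read:

```latex
\begin{align*}
|d|   &= \log_2(\text{page size}) \\
|p_1| &\ge \log_2 N_1, \qquad |p_2| \ge \log_2 N_2 \\
\text{logical space size} &= N_1 \times N_2 \times \text{page size}
\end{align*}
```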

Analysis of a Two-Level Paging Example
A logical address (on a 32-bit machine with a 4K page size) is divided into:
• a page number consisting of 20 bits
• a page offset consisting of 12 bits
Each entry uses 4 bytes.
How do we build a two-level paging scheme?
• How many entries can a single-page table hold?
• What are p1 and p2?
[Figure: the logical address split into page-number fields p1 and p2 (widths to be determined) and a 12-bit offset d.]

Analysis of a Two-Level Paging Example
A one-page table of 4KB holds 1K entries of 4 bytes each.
1K entries require 10 bits for each of p1 and p2.
The page number is therefore divided into:
• a 10-bit level-1 index
• a 10-bit level-2 index
[Figure: the logical address split into a 10-bit p1, a 10-bit p2, and a 12-bit offset d.]
What if we use 2 bytes for each table entry? Does the logical space size increase? Does the physical space size increase? (A worked answer is sketched below.)
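A possible worked answer (my own, assuming 4 KB pages and that each table still occupies exactly one page):

```latex
\begin{align*}
\text{entries per page} &= \frac{4\,\mathrm{KB}}{2\,\mathrm{B}} = 2^{11} = 2048
  \;\Rightarrow\; |p_1| = |p_2| = 11 \text{ bits} \\
\text{max logical space} &= 2^{11} \times 2^{11} \times 2^{12} = 2^{34}\,\mathrm{B} = 16\,\mathrm{GB}
  \quad (\text{larger than with 4-byte entries}) \\
\text{max physical space} &\le 2^{16} \times 4\,\mathrm{KB} = 256\,\mathrm{MB}
  \quad (\text{a 2-byte entry holds at most 16 bits of frame number,} \\
  &\phantom{\le 2^{16} \times 4\,\mathrm{KB} = 256\,\mathrm{MB}\quad (}
  \text{fewer once flag bits are reserved})
\end{align*}
```

So a smaller entry enlarges the addressable logical space but shrinks the addressable physical space.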

Example of Maximum Logical Space Size
Maximum logical space size
 = (# of entries in the level-1 page table) * (# of entries in the level-2 page table) * (page size)
 = 1K * 1K * 4KB = 2^32 bytes = 4GB
[Figure: the logical address split into a 10-bit p1 (1K entries), a 10-bit p2 (1K entries), and a 12-bit offset (4K page size).]

An Example of Three-Level Paging in a 64-bit Address Space
What is the maximum logical and physical space size?

Three-Level Paging in Linux
The address is divided into 4 parts (global directory, middle directory, page table, and offset).

Hashed Page Tables
Common for address spaces larger than 32 bits, where a conventional page table grows in proportion to the amount of virtual memory allocated to processes.
Use a hash table to limit the cost of a search:
• to one, or at most a few, page-table entries
• one hash table per process
• each hash location contains a chain of elements hashing to the same slot
Use this hash table to find the physical page for each logical page:
• if a match is found, the corresponding physical frame is extracted
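A minimal sketch of the lookup (the struct layout, table size, and hash function are illustrative assumptions, not from the lecture):

```c
/* Sketch of a hashed page table lookup with chaining. */
#include <stddef.h>
#include <stdint.h>

#define HASH_BUCKETS 1024u

struct hpt_entry {
    uint64_t vpn;                /* virtual page number (the key)    */
    uint64_t frame;              /* physical frame it maps to        */
    struct hpt_entry *next;      /* chain of entries in this bucket  */
};

static struct hpt_entry *buckets[HASH_BUCKETS];   /* one table per process */

static size_t hash_vpn(uint64_t vpn) {
    return (size_t)(vpn * 2654435761u) % HASH_BUCKETS;   /* simple hash */
}

/* Returns 0 and sets *frame if the page is mapped, -1 otherwise (fault). */
int hpt_lookup(uint64_t vpn, uint64_t *frame) {
    for (struct hpt_entry *e = buckets[hash_vpn(vpn)]; e != NULL; e = e->next) {
        if (e->vpn == vpn) {          /* match: extract the physical frame */
            *frame = e->frame;
            return 0;
        }
    }
    return -1;                        /* no match: page fault */
}
```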

Hashed Page Table

Inverted Page Table
One hash table for all processes:
• one entry for each real (physical) page of memory
• each entry holds the virtual address of the page stored in that real memory location, plus information about the process that owns the page
Decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs.
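A minimal sketch of the search (a linear scan is shown for clarity; real systems hash on (pid, vpn) to avoid scanning; all names are illustrative):

```c
/* Sketch of an inverted page table: one entry per physical frame,
 * searched by (pid, vpn). */
#include <stdint.h>

#define NUM_FRAMES 4096u

struct ipt_entry {
    int      valid;
    uint32_t pid;                /* owning process                    */
    uint64_t vpn;                /* virtual page stored in this frame */
};

static struct ipt_entry ipt[NUM_FRAMES];   /* index == physical frame # */

/* Returns the physical frame number, or -1 on a fault. */
long ipt_lookup(uint32_t pid, uint64_t vpn) {
    for (uint32_t frame = 0; frame < NUM_FRAMES; frame++) {
        if (ipt[frame].valid && ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return (long)frame;            /* the index is the frame */
    }
    return -1;                              /* page fault */
}
```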

Inverted Page Table Architecture

Address Translation Comparison

Segmentation
• Advantages: fast context switching (segment mapping maintained by the CPU)
• Disadvantages: external fragmentation

Paging (single-level page table)
• Advantages: no external fragmentation; fast, easy allocation
• Disadvantages: large table size (~ size of virtual memory); internal fragmentation

Paged segmentation / two-level pages
• Advantages: table size ~ # of pages used in virtual memory; fast, easy allocation
• Disadvantages: multiple memory references per page access

Inverted table
• Advantages: table size ~ # of pages in physical memory
• Disadvantages: hash function more complex

Summary
Page tables
• Memory is divided into fixed-sized chunks (pages).
• The virtual page number from the virtual address is mapped through the page table to a physical page number.
• The offset of the virtual address is the same as in the physical address.
• Large page tables can themselves be placed into virtual memory.
Usage of page table entries
• Page sharing
• Copy-on-write
• Demand paging
• Zero-fill on demand
Multi-level tables
• The virtual address is mapped through a series of tables.
• Permit sparse population of the address space.
Inverted page table
• The size of the page table is related to the physical memory size.