Practical Session 9, Memory

Operating Systems, 112: Practical Session 9, Memory

Quick recap - Swapping Swapping means bringing an entire process into memory, using it, and possibly writing it back to a backing store (e.g., a large enough disk). Swap time is proportional to the amount of memory swapped, making this a heavy operation. Swapping also creates holes in memory. It is hardly used anymore.

Quick recap – Paging and Virtual Memory What happens when a process's memory requirements are too large to fit into physical memory? Only part of the program resides in memory while the rest stays in the backing store. This means we can support an address space that is independent of physical memory (on a 32-bit machine, there are 2^32 addresses in virtual memory). Paging – divide physical memory into fixed-size blocks.

Quick recap – Paging and Virtual Memory The virtual address space is divided into pages; the corresponding units of physical memory are called page frames, or simply frames. Memory is managed with the aid of the MMU. The page table mapping pages to page frames can itself become too large to handle efficiently. Solution: use a two-level page table.

Quick recap – TLB The Translation Lookaside Buffer (associative memory) is a small table residing in the MMU. Each entry contains information about one page. The TLB maps virtual pages to physical addresses without going through the page table. It is traditionally implemented in hardware (all entries are looked up in a single step). When a miss occurs, an ordinary page table lookup is done and the TLB is updated.
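
The TLB-first translation path above can be sketched in a few lines. This is a simplified model, not any real MMU: the dict-based "TLB" stands in for a small associative hardware table, and all names are illustrative.

```python
PAGE_SHIFT = 12                        # assume 4 KB pages

def translate(va, tlb, page_table):
    """Map a virtual address to a physical one, probing the TLB first."""
    vpn = va >> PAGE_SHIFT             # virtual page number
    offset = va & ((1 << PAGE_SHIFT) - 1)
    if vpn in tlb:                     # TLB hit: skip the page-table walk
        frame = tlb[vpn]
    else:                              # TLB miss: ordinary lookup,
        frame = page_table[vpn]        # then update the TLB
        tlb[vpn] = frame
    return (frame << PAGE_SHIFT) | offset
```

On a miss the function falls back to the page table and caches the result, so a repeated reference to the same page hits the TLB and never touches the table.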

Quick recap – Inverted Page Table The IPT uses one entry per frame (physical memory), instead of one per page (virtual memory). Virtual-to-physical translation may become significantly harder: when process n references virtual page p, we now have to scan the entire IPT for an entry (n,p) – and this must be done on every memory reference! The tradeoff: the amount of memory required for the page table vs. the time required to search it. The IPT is often combined with a hash function to speed up the search.
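
A sketch of the inverted-page-table idea: one entry per physical frame, each holding the (process, virtual page) pair that occupies it. The structures and names are illustrative, not taken from any particular system.

```python
def ipt_lookup(ipt, pid, vpage):
    """Naive translation: scan every frame's entry for (pid, vpage).
    ipt is a list indexed by frame number; each entry is a (pid, vpage)
    pair or None for a free frame."""
    for frame, entry in enumerate(ipt):
        if entry == (pid, vpage):
            return frame
    return None                        # not mapped: page fault

def build_hash_index(ipt):
    """The usual speed-up: hash (pid, vpage) to its frame, turning the
    per-reference linear scan into an expected O(1) probe."""
    index = {}
    for frame, entry in enumerate(ipt):
        if entry is not None:
            index[entry] = frame
    return index
```

The hash index is exactly the tradeoff the slide mentions: a little extra memory buys back the search time lost by inverting the table.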

Question 1 Assume a 32-bit system with a 2-level page table (page size is 4 KB, |p1|=|p2|=10 bits, |offset|=12 bits). Program "A" on this system requires 12 MB of memory: the bottom 4 MB for program text, followed by 4 MB for data and, lastly, the top 4 MB for the stack. How many page tables are actually required for this process? Describe the lookup of address 0x00403004 within the page tables.

Question 1 We use the following scheme: the address is split into p1 (10 bits), p2 (10 bits) and offset d (12 bits). The 12 least significant bits address 2^12 bytes = 4 KB; such a page is pointed to by any of the 2^10 entries of p2. In total, a second-level page table can map 2^22 bytes = 4 MB, and each such table is pointed to by a first-level table entry. In our case we require 4 page tables: a single first-level page table, which points to 3 second-level page tables.

Question 1 (figure: lookup of 0x00403004 = 4,206,596 decimal, 32-bit virtual address, 4 KB pages). Splitting the binary representation: p1 = 0000000001 = 1, p2 = 0000000011 = 3, d = 000000000100 = 4. Entry 1 of the top-level page table covers the 4-8 MB region; entry 3 of the selected second-level table covers bytes 12288-16383 within it.
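
The split of 0x00403004 can be checked with a short sketch of the 10/10/12 scheme from the question:

```python
def split_va(va):
    """Split a 32-bit virtual address into (p1, p2, d)."""
    p1 = (va >> 22) & 0x3FF    # top 10 bits: first-level index
    p2 = (va >> 12) & 0x3FF    # next 10 bits: second-level index
    d  = va & 0xFFF            # low 12 bits: offset within the page
    return p1, p2, d
```

Running it on 0x00403004 reproduces the indices from the slide: first-level entry 1, second-level entry 3, offset 4.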

Question 2 Consider a paged memory system with a two-level page table. If a memory reference takes 20 nanoseconds (ns), how long does a paged memory reference take? Assume that the second-level page table is always in memory, and:
1. There is no TLB, and the needed page is in main memory.
2. There is a TLB with an access time of 0.05 ns, the needed page is in main memory, and:
a. the TLB does not contain information on this page;
b. the TLB contains information on this page.

Question 2
1. We need to access memory three times: the first and second accesses read the first- and second-level page table entries, which point to the physical address we access last. Total time: 3x20 = 60 ns.
2. Remember that we first access the TLB:
a. Since the entry is not in the TLB, after examining it we revert to the regular lookup scheme above. Total time: 0.05 + 3x20 = 60.05 ns.
b. If the entry is in the TLB, after examining it we know the exact page frame and access it directly. Total time: 0.05 + 20 = 20.05 ns.
Note that virtual memory may significantly reduce memory performance; the TLB provides a mechanism to overcome this problem.
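
The three totals are easy to recompute (times in ns, using the figures from the question):

```python
MEM = 20.0     # one memory reference
TLB = 0.05     # one TLB probe

no_tlb   = 3 * MEM        # two table levels + the data access itself
tlb_miss = TLB + 3 * MEM  # failed probe, then the full walk
tlb_hit  = TLB + MEM      # successful probe, then the data access
```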

Question 3 Consider a computer with an address space of 32 bits and a 2 KB page size.
1. What is the maximal size of a (single-level) page table, assuming each entry is 4 bytes? What is the maximal size of a program's memory? Does it depend on the size of the pages?
2. Assume a two-level page table in which 8 bits are dedicated to the first-level table. What is the maximal size of a second-level table? Can we run larger programs now?
3. Assume that the first-level table is always in memory, a page fault happens in 4% of the cases, and the second-level table is not in memory and a page fault occurs in 1% of the cases. Calculate the average access time to a page, if disk access time is 30x10^-6 sec and memory access time is 100x10^-9 sec.

Question 3 The virtual memory size is 2^32 bytes, and each page is 2 KB (2^11 bytes). The total number of pages is therefore 2^32/2^11 = 2^21. Since each entry is 4 bytes long, we require 4x2^21 = 2^23 bytes = 8 MB to hold this table. Maximal program size is 4 GB (the size of the virtual address space), regardless of the page size.
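
Verifying the single-level table size (32-bit address space, 2 KB pages, 4-byte entries):

```python
entries = 2**32 // 2**11          # number of pages: 2^21
table_bytes = 4 * entries         # 4-byte entries: 2^23 bytes = 8 MB
```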

Question 3 Using 8 bits for the first-level table, with an 11-bit offset, leaves 32 - 8 - 11 = 13 bits for the second-level index, i.e. 2^13 entries per second-level table. The size of such a table is 4x2^13 bytes = 32 KB. The size of the virtual address space stays the same, so we cannot run bigger programs.

Question 3 Break this into three cases:
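
The slide leaves the three cases implicit. Under one plausible reading of the question – an assumption, not the original solution – the first-level table never faults, the second-level table faults in 1% of accesses, the page itself faults in 4%, and each fault costs one disk access on top of the memory access:

```python
MEM  = 100e-9    # memory access: 100 x 10^-9 sec
DISK = 30e-6     # disk access:    30 x 10^-6 sec

# ASSUMED breakdown of the three cases:
t_first  = MEM                   # first-level table: always resident
t_second = MEM + 0.01 * DISK     # second-level table: 1% fault rate
t_page   = MEM + 0.04 * DISK     # the page itself:    4% fault rate

eat = t_first + t_second + t_page   # expected access time, ~1.8 us
```

Under this interpretation the average access time comes out to roughly 1.8 microseconds, dominated by the disk penalty of the faults.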

Question 4
1. What is the minimal size of a single-level page table on a 32-bit machine with 4 KB pages?
2. What is the size of a 64-bit machine's page table with 4 KB pages? How many layers do we need if we want to ensure that each page table requires only a single page?
3. What is the size of the inverted page table for a computer with 2 GB RAM, in which each page is 16 KB long and each entry is 8 bytes long?

Question 4
1. If the address space consists of 2^32 bytes with 4 KB pages, we have 2^32/2^12 = 2^20 entries (over a million). Using 4 bytes per entry, we get a 4 MB page table.
2. With a 64-bit machine, we need 2^52 entries. Each entry being 8 bytes long results in a 2^55-byte page table: more than 30 PetaBytes (Peta > Tera > Giga). Limiting a page table to the size of a page means each table can hold only 2^12/2^3 = 2^9 entries, leading to ceil(52/9) = 6 layers. That means 6 memory accesses for each address translation.
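
Checking the sizes quoted above:

```python
import math

# 32-bit machine, 4 KB pages, 4-byte entries
size_32 = 4 * (2**32 // 2**12)        # 4 MB

# 64-bit machine, 4 KB pages, 8-byte entries
size_64 = 8 * (2**64 // 2**12)        # 2^55 bytes = 32 PB

# one 4 KB page holds 2^12 / 2^3 = 2^9 eight-byte entries, so
# covering the 52 index bits takes ceil(52/9) levels
layers = math.ceil(52 / 9)
```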

Question 4
3. The IPT has an entry per frame. That means we must first divide physical memory into frames, so we know the number of entries: 2 GB = 2^31 bytes, and 2^31/2^14 = 2^17 entries. Each entry is 8 bytes long, so the total size is 2^17x2^3 = 2^20 bytes = 1 MB.
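
The same arithmetic as a two-line check:

```python
frames = 2**31 // 2**14      # 2 GB of RAM in 16 KB frames: 2^17 frames
ipt_bytes = 8 * frames       # 8-byte entries: 2^20 bytes = 1 MB
```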

Question 5.1, Moed Beit 2010 A given operating system manages memory with the aid of page tables. It is currently executing two programs, A and B (assume that these do not call exec()). Answer the following questions:
1. If B was created as a result of the fork() system call made by A, can the system use the same page table for both programs?
2. Assume that both A and B were created from process C. Can the system use the same page table for both programs in this case?
3. Now assume that the system uses segmentation with paging and that B was created as a result of the fork() system call made by A. Can the system use the same page table for both programs in at least one segment?

Question 5.1
1. No. Despite having much in common, the two programs may execute different code (e.g., allocate memory differently) after the call to fork() is made.
2. No. Although both execute the same code, their progress (in terms of the code line being executed) may differ. As a result, one process could potentially swap out pages required by the other.
3. Yes. Since the code segment is the same for both (no exec() calls are allowed), this segment can be maintained in the same page table.

Question 5.2 The time required to read an entry from the TLB in a given system is 10 ns. Access to a single-level page table entry is 10 times slower, requiring 100 ns. What should the TLB hit rate be so that the average time to find a virtual address is 30 ns? Explain your calculation.

Question 5.2 The TLB hit rate measures how often a sought virtual address is translated directly from the TLB; when the page is not in the TLB, one has to access the page table as well. Let p denote the TLB hit rate. We know that: p*10 + (1-p)*(10+100) = 30. Thus, the TLB hit rate should be p = 0.8. Note: with this hit rate, the average access time is over 70% faster than the 110 ns of a plain table lookup!
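
Solving the equation symbolically and checking the result:

```python
tlb, table, target = 10.0, 100.0, 30.0

# p*tlb + (1-p)*(tlb + table) = target
# => tlb + table - p*table = target
# => p = (tlb + table - target) / table
p = (tlb + table - target) / table
```

Plugging p back into the left-hand side reproduces the 30 ns target, confirming p = 0.8.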

Question 5.3 Assume that each TLB entry includes the following fields: valid bit, modified bit, protection code, virtual address and frame number. Can a single-CPU machine which supports multiple processes use a single such TLB without changing it? If it can, explain how this is done; otherwise, explain why it cannot be done. Is the same also true for a multi-CPU machine? Explain.

Question 5.3 A single-CPU machine can use a single such TLB (in fact, this was the common setup in the days preceding multi-core CPUs). The important thing to remember is that whenever a new process starts running, the valid bits of all entries must be cleared to 0, so no stale translations remain. This does not hold when there are multiple CPUs (or cores): with multiple processes running simultaneously, both may use the same virtual address, which should be translated into two distinct physical addresses. One way to overcome this problem is to add the process's PID as another field of each TLB entry.

Assignment 3 xv6 memory overview

xv6 memory overview Virtual memory in xv6 is managed without a backing store. This means that there is no swapping mechanism and that all programs are maintained in physical memory at all times. xv6 sets up the segmentation hardware so that virtual and linear addresses are always the same (i.e., segmentation doesn't do anything). Each page is of size 2^12 = 4096 bytes. A user process's page table includes much of the kernel's mapping; however, the page protection bits prevent the user from accessing anything other than its own memory.

xv6 memory: virtual-to-physical translation overview (figure): a virtual (= linear) address splits into a 20-bit page number and a 12-bit offset. The page number indexes a page table of 2^20 entries (PTEs); each PTE holds a 20-bit physical frame number plus flags. The frame number concatenated with the 12-bit offset forms the physical address.

xv6 memory: a process's two-level page table structure (figure): a virtual address va splits into PDX(va) (10 bits), PTX(va) (10 bits) and a 12-bit offset. PDX indexes the page directory (4096 bytes, 1024 entries); each 32-bit entry holds a page address (extracted by PTE_ADDR) plus flags, pointing to one of up to 1024 "page table pages" (4096 bytes, 1024 entries each), which PTX indexes.
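
The indices in the figure correspond to xv6's address-manipulation macros (PDX, PTX and PTE_ADDR, defined in mmu.h). A Python rendering of the 10/10/12 split they implement:

```python
def PDX(va):
    return (va >> 22) & 0x3FF      # page-directory index: top 10 bits

def PTX(va):
    return (va >> 12) & 0x3FF      # page-table index: next 10 bits

def PTE_ADDR(pte):
    return pte & ~0xFFF            # physical address stored in a PTE
```

These reproduce the Question 1 lookup as well: 0x00403004 yields directory index 1 and table index 3, since the scheme there is the same 10/10/12 split.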

Assignment 3 You will add a basic backing store. To keep things simple, you will add to xv6 the ability to swap processes: when free memory descends below a pre-defined threshold, processes will be swapped out according to one of several policies; processes are swapped in on demand.

Assignment 3 - Architecture (figure: the OS, processes P1-P4, the scheduler and the swapper in physical memory, alongside the disk). As the system works, processes are created and free memory decreases. The OS then decides to swap a process to disk. Later, the scheduler wishes to run process 2, but memory is still low, so the OS swaps process 4 out and process 2 in.

Assignment 3 - Architecture Writing to files from within the kernel, though possible, is not recommended; nevertheless, we will do exactly that. We use an additional process (the swapper) that is responsible for disk operations, since the scheduler itself should not access the disk. The scheduler and the swapper must coordinate their operations.