COMP5102/5122 Lecture 4
Operating Systems (OS): Memory Management
(phones off, please)
© De Montfort University, 2005

Lecture Outline
Memory management – requirements
Partitioning – fixed, dynamic
Placement algorithms – first-fit, best-fit, worst-fit
Swapping
Virtual memory – paging, segmentation

Memory Management
Process scheduling defines when a process should be run; memory management defines where a process should be stored.
The problem: no matter how much memory we install, some programmer will always want to use more. How do we get a quart into a pint pot?

Memory Management …
Subdividing memory to accommodate multiple processes.
Memory needs to be allocated efficiently to pack as many processes into memory as possible.
It should also be possible to run a program whose size is larger than the available real memory.

Requirements
The programmer does not know where the program will be placed in memory when it is executed.
While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated).
Memory references in the code must be translated to actual physical memory addresses.

Fixed Partitioning
Main memory is divided into equal-sized (or unequal-sized) partitions, and these partitions are assigned to processes.
Main memory use is inefficient: any program, no matter how small, occupies an entire partition. The space wasted inside a partition is called internal fragmentation.
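For example (illustrative numbers, not from the slides): with fixed 8 MB partitions, a 3 MB process still occupies an entire partition, so the remaining 5 MB of that partition is wasted as internal fragmentation.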

Dynamic Partitioning
Partitions are of variable length and number.
A process is allocated exactly as much memory as it requires.
Eventually holes appear in memory; this is called external fragmentation.
Compaction must be used to shift processes so that they are contiguous and all free memory is in one block.

Placement Algorithms
The operating system must decide which free block to allocate to a process.
First-fit: scan the free list from the start and allocate the first hole that is big enough.
Best-fit: search the entire list and allocate the smallest hole that is big enough.
Worst-fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
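The three policies can be sketched in a few lines of C. This is a minimal illustration rather than the lecture's code: holes are modelled as an array of sizes in KB in memory order, each function returns the index of the chosen hole (or -1 if none fits), and the names and the data in main() are made up.

```c
/* Minimal sketch of first-fit, best-fit and worst-fit hole selection.
 * Hole sizes are in KB, in memory order; -1 means no hole is big enough. */
#include <stdio.h>

static int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;                       /* first hole that is big enough */
    return -1;
}

static int best_fit(const int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                       /* smallest hole that is big enough */
    return best;
}

static int worst_fit(const int holes[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                      /* largest hole that is big enough */
    return worst;
}

int main(void) {
    int holes[] = {500, 120, 300, 250};     /* illustrative hole sizes, KB */
    int n = (int)(sizeof holes / sizeof holes[0]);
    int request = 200;                      /* illustrative request, KB */
    printf("first-fit -> hole %d\n", first_fit(holes, n, request));
    printf("best-fit  -> hole %d\n", best_fit(holes, n, request));
    printf("worst-fit -> hole %d\n", worst_fit(holes, n, request));
    return 0;
}
```

On this made-up data, first-fit and worst-fit both choose the 500K hole at index 0, while best-fit chooses the 250K hole at index 3.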

Example
A variable-partition memory has the following hole sizes, in memory order: 200K, 600K, 400K, 800K, 350K, 70K.
A new process of size 300K enters the system. Determine where it will go according to the best-fit, first-fit and worst-fit algorithms, and update the hole-size status of the memory after the process has been added.
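A worked answer, assuming (as the placement algorithms above do) that the 300K process must fit entirely within a single hole:
First-fit scans in memory order and takes the 600K hole, the first one of at least 300K; the holes become 200K, 300K, 400K, 800K, 350K, 70K.
Best-fit takes the smallest adequate hole, 350K; the holes become 200K, 600K, 400K, 800K, 50K, 70K.
Worst-fit takes the largest hole, 800K; the holes become 200K, 600K, 400K, 500K, 350K, 70K.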

A Simple Model
In the simplest memory-management model, one process is in memory at a time, and that process is allowed to use as much memory as is available.
[Figure: memory layout with the operating system occupying 0x0000 to 0x1000 and a single user program (user program 1 or user program 2, one at a time) occupying 0x1000 to 0xFFFF.]

Overlays
Split the program into small blocks.
Keep in main memory only those blocks that are needed at any given time.
Overlays are needed when a process is larger than the amount of memory allocated to it.

Overlays …
It is possible to increase the amount of memory available through the use of overlays.
[Figure: the program's blocks (initialisation, input, processing, output) are overlaid in turn into the same user region from 0x1000 to 0xFFFF, above the operating system at 0x0000 to 0x1000, with only the block currently needed kept resident.]

Swapping
Moving processes to and fro between main memory and the hard disk is called swapping.
Roll out, roll in – a swapping variant used with priority-based scheduling algorithms: a lower-priority process is swapped out so that a higher-priority process can be loaded and executed.

Virtual Memory
Virtual memory exploits locality of reference, the tendency for the most commonly used instructions and data to be referenced repeatedly:
– instruction execution is localised within loops or heavily used subroutines,
– data manipulation is on local variables or on tables or arrays of information.
Virtual memory can also be defined as a mapping from a virtual address space to a physical address space.

A System with Virtual Memory (paging)
Address translation: the MMU converts virtual addresses to physical addresses via an OS-managed lookup table (the page table).
[Figure: the CPU issues virtual addresses 0 to N-1; the page table maps them either to physical addresses 0 to P-1 in memory or to locations on disk.]

Paging
Partition memory into small, equal-size chunks and divide each process into chunks of the same size.
The chunks of a process are called pages and the chunks of memory are called frames.
The operating system maintains a page table for each process:
– it contains the frame location for each page in the process,
– each memory reference consists of a page number and an offset within the page.

VM Address Translation
[Figure: an n-bit virtual address is split into a virtual page number (bits p to n-1) and a page offset (bits 0 to p-1); translation replaces the virtual page number with a physical page number (bits p to m-1) to give the physical address. The page offset bits do not change as a result of translation.]
Parameters:
– P = 2^p = page size (bytes)
– N = 2^n = virtual address limit
– M = 2^m = physical address limit
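As an illustration (numbers not from the slides): with p = 12 the page size is P = 2^12 = 4096 bytes; with a 32-bit virtual address (n = 32) there are 2^32 / 2^12 = 2^20 virtual pages. Translation keeps the low 12 offset bits unchanged and supplies the upper bits of the physical address from the page table.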

Paging …
Each process has its own page table.
Each page-table entry contains the frame number of the corresponding page in main memory.
A bit is needed to indicate whether the page is in main memory or not.
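The mechanism of the last few slides can be sketched in C. This is a minimal illustration, not the lecture's or any real kernel's code: it assumes 4 KB pages (p = 12), a small array-indexed page table, and invented names such as pte_t and translate.

```c
/* Sketch of virtual-to-physical translation with a per-process page table.
 * Assumes 4 KB pages and an array-indexed page table; all names are
 * invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                      /* p = 12 -> 4096-byte pages */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 256                     /* small table for the example */

typedef struct {
    uint32_t frame;                       /* frame number in main memory */
    int present;                          /* 1 if the page is resident */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Returns 0 and writes *phys on success, -1 on a page fault. */
static int translate(uint32_t vaddr, uint32_t *phys) {
    uint32_t vpn    = vaddr >> PAGE_BITS;        /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* unchanged by translation */
    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return -1;                               /* page fault: OS must fetch the page */
    *phys = (page_table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}

int main(void) {
    page_table[3].frame = 7;                     /* pretend page 3 is in frame 7 */
    page_table[3].present = 1;
    uint32_t phys;
    if (translate(3 * PAGE_SIZE + 0x123, &phys) == 0)
        printf("virtual 0x%x -> physical 0x%x\n",
               3 * PAGE_SIZE + 0x123, (unsigned)phys);   /* prints 0x3123 -> 0x7123 */
    return 0;
}
```

The -1 return stands in for the page-fault path described two slides below.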

Page Tables
[Figure: a memory-resident page table indexed by virtual page number; each entry holds a valid bit and either a physical page address in physical memory or a reference to disk storage (swap blocks).]

Execution of a Program
The operating system brings only a few blocks of the program into main memory.
An interrupt is generated when a block is needed that is not in main memory: a page fault.
The operating system places the process in a blocked state, and a DMA transfer is started to fetch the page from disk.
An interrupt is issued when the disk I/O is complete, which causes the OS to place the affected process in the ready queue.

Thrashing
Swapping out a piece of a process just before that piece is needed.
The processor spends most of its time swapping pieces rather than executing user instructions.

Replacement Policy
Which page should be replaced? The page that is least likely to be referenced in the near future.
Each page in memory usually has two bits associated with it:
– R, the referenced bit: set when the page is accessed (read or written),
– M, the modified bit: set when the page is written to.
These bits are set by the MMU when a page is read or written and cleared by the OS; in particular, the OS clears the R bits on every clock interrupt.

When a page fault occurs, the resident pages fall into four possible classes:
– R = 0, M = 0: not referenced, not modified
– R = 0, M = 1: not referenced, but modified
– R = 1, M = 0: referenced, but not modified
– R = 1, M = 1: referenced and modified

Replacement algorithm
NRU (Not Recently Used): simple to operate, with adequate performance.
If there are pages with R = 0 and M = 0 (not referenced, not modified), swap out one of these pages;
else if there are pages with R = 0, M = 1 (not referenced, but modified), swap out one of these pages;
else if there are pages with R = 1, M = 0 (referenced, but not modified), swap out one of these pages;
else swap out one of the R = 1, M = 1 pages.
Note that any page with M = 0 can simply be overwritten, since it has not been written to since being read from disk.
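A sketch of NRU victim selection, under the assumption that each resident page carries the R and M bits described above; the page_t structure and function names are invented for illustration.

```c
/* Sketch of NRU victim selection: pick a page from the lowest non-empty
 * class, where class = 2*R + M (0 is the best candidate, 3 the worst). */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    int r;      /* referenced bit: cleared by the OS on every clock interrupt */
    int m;      /* modified bit: set when the page is written to */
} page_t;

/* Returns the index of the victim page, or -1 if there are no pages. */
static int nru_victim(const page_t pages[], size_t n) {
    int victim = -1;
    int victim_class = 4;                      /* classes run from 0 to 3 */
    for (size_t i = 0; i < n; i++) {
        int cls = 2 * pages[i].r + pages[i].m;
        if (cls < victim_class) {
            victim = (int)i;
            victim_class = cls;
        }
    }
    return victim;
}

int main(void) {
    page_t pages[] = { {1, 1}, {0, 1}, {1, 0}, {0, 1} };
    /* classes are 3, 1, 2, 1: the first R = 0, M = 1 page (index 1) is chosen */
    printf("victim = page %d\n", nru_victim(pages, 4));
    return 0;
}
```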

Replacement algorithm …
FIFO: not very good; pages are kept in a list and the oldest page, at the end of the list, is swapped out even if it is still in use.
LRU: swap out or discard the page that has been least recently used (used by the Atlas computer). Each page has a counter which is incremented by the MMU when the page is accessed; on a page fault, swap out the page with the lowest count.

Replacement algorithms …
NFU: similar to LRU but implemented in software; the OS adds the R bit to the page's counter on each clock interrupt (and then clears the R bit). On a page fault, swap out the page with the lowest count.
Exercise: find out about the problems with LRU and NFU.
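NFU's software accounting can be sketched the same way. Again the structures and names are illustrative, not the lecture's code: on each clock interrupt the OS adds each page's R bit to its counter and clears the R bit, and on a page fault the page with the lowest counter is chosen.

```c
/* Sketch of NFU: counters accumulate R bits at every clock tick, and the
 * page with the lowest counter is the replacement victim. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    int r;                /* referenced since the last clock tick */
    unsigned counter;     /* NFU usage counter */
} nfu_page_t;

static void nfu_clock_tick(nfu_page_t pages[], size_t n) {
    for (size_t i = 0; i < n; i++) {
        pages[i].counter += (unsigned)pages[i].r;
        pages[i].r = 0;                        /* R bits are cleared every tick */
    }
}

static int nfu_victim(const nfu_page_t pages[], size_t n) {
    int victim = 0;
    for (size_t i = 1; i < n; i++)
        if (pages[i].counter < pages[victim].counter)
            victim = (int)i;
    return victim;
}

int main(void) {
    nfu_page_t pages[3] = { {1, 0}, {0, 0}, {1, 0} };
    nfu_clock_tick(pages, 3);                            /* counters become 1, 0, 1 */
    printf("victim = page %d\n", nfu_victim(pages, 3));  /* page 1 */
    return 0;
}
```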

Segmentation
The OS creates multiple virtual address spaces (segments), each starting at an arbitrary location and with an arbitrary length; within each segment, addressing starts at virtual address 0.
Each process can be assigned a different segment, independent of the others.
Relocation is done at run time by the virtual-memory mapping mechanism.
Segmentation is implemented much like paging, through a lookup table; the difference is that each segment descriptor in the table contains the base address of the segment and a length.

Segmentation with Paging
Some operating systems allow segmentation to be combined with paging. If the size of a segment exceeds the size of main memory, the segment may be divided into equal-size pages.

Segmentation with paging …
The address consists of a segment number, a page within the segment, and an offset within the page.
The segment number is used to find the segment descriptor, and the address within the segment is used to find the page frame and the offset within that page.
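A sketch of the address split this slide describes, with illustrative bit widths (8 segment bits, 12 page bits and 12 offset bits in a 32-bit address); the field sizes are an assumption, not something the lecture specifies.

```c
/* Splitting a segmented-paged virtual address into its three fields. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12
#define PAGE_BITS   12
#define PAGE_MASK   ((1u << PAGE_BITS) - 1)
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void) {
    uint32_t vaddr   = 0x02345678;
    uint32_t segment = vaddr >> (PAGE_BITS + OFFSET_BITS);   /* top 8 bits   */
    uint32_t page    = (vaddr >> OFFSET_BITS) & PAGE_MASK;   /* next 12 bits */
    uint32_t offset  = vaddr & OFFSET_MASK;                  /* low 12 bits  */
    printf("segment %u, page %u, offset 0x%x\n",
           (unsigned)segment, (unsigned)page, (unsigned)offset);
    return 0;
}
```

The segment number indexes the segment table, whose descriptor locates that segment's page table; the page number then indexes that table and the offset is appended unchanged.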

Summary
A number of processes can coexist in memory using a variety of allocation techniques and placement policies.
The main problem is the creation of unusable holes as processes terminate, which sometimes makes it difficult to accommodate new processes.
Compaction can be used to shuffle processes and create larger, more usable spaces, but the large overheads involved mean it is not always feasible to carry out in real time.
When swapping is used, the system can handle more processes than it has room for in memory: the virtual memory concept, and hence paging and segmentation.
When memory is full, a decision must be made as to which page or pages are to be replaced (the replacement policy).

Example Page Sizes
[Table of example page sizes not preserved in the transcript.]

Swap file
A swap file (or swap space or, in Windows NT, a pagefile) is space on a hard disk used as the virtual-memory extension of a computer's real memory (RAM). Having a swap file allows the operating system to pretend that you have more RAM than you actually do: the least recently used data in RAM can be "swapped out" to the hard disk until it is needed later, so that new data can be "swapped in" to RAM.
In larger operating systems (such as IBM's OS/390), the units that are moved are called pages and the swapping is called paging.
One advantage of a swap file is that it can be organised as a single contiguous space, so that fewer I/O operations are required to read or write a complete file. In general, Windows and Unix-based operating systems provide a default swap file of a certain size that the user or a system administrator can usually change.

Thrashing
Thrashing is computer activity that makes little or no progress, usually because memory or other resources have become exhausted or too limited to perform the needed operations. When this happens, a pattern typically develops: a process or program makes a request of the operating system, the operating system tries to find resources by taking them from some other process, and that process in turn makes new requests that cannot be satisfied.
In a virtual storage system (an operating system that manages its logical storage or memory in units called pages), thrashing is a condition in which excessive paging operations are taking place. A system that is thrashing can be perceived as either very slow or as one that has come to a halt.