Memory Management: Requirements, Partitioning, Paging, and Segmentation


Memory Management: Requirements, Partitioning, Paging, and Segmentation
Instructor notes: point students to the SharedMemory project in svn, briefly describe what the shared memory code does, and return the graded quiz, exam, etc.

Main Memory: The Big Picture
[Figure: kernel memory contains the process table, an array of proc structures; each process image in main memory consists of Text (shared), Data, Stack, and a kernel stack / u area.]
The u (user) area of the proc structure maintains control information about a process. The major fields in the u area include signal handlers and related information. See http://www.linuxjournal.com/article/6483

Memory Management
Main memory holds:
- The kernel
- Processes that are running, ready, or blocked
Memory management is the efficient and effective allocation of main memory to processes, performed by the operating system with support from the hardware.

Requirements
- Relocation
- Protection
- Sharing
- Logical organization
- Physical organization

Relocation
Why/What:
- The programmer does not know where the program will be placed in memory when it is executed
- While the program is executing, it may be swapped to disk and returned to main memory at a different location
Consequences/Constraints:
- Memory references in the code must be translated to actual physical memory addresses

Protection
Why/What:
- Protect a process from interference by other processes
- Processes require permission to access memory in another process's address space
Consequences/Constraints:
- Impossible to check addresses at compile/link time, since the program could be relocated
- Addresses must be checked at run time

Sharing
- Allow several processes to access the same data
- Allow multiple processes to share the same program text rather than each having its own separate copy
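
The SharedMemory project referenced on the title slide is not reproduced in this transcript. As a rough, generic illustration of data sharing between processes (not the course's code), the C sketch below creates a POSIX shared-memory object with shm_open, maps it with mmap so a parent and child share the same pages, and lets the child write a string that the parent then reads; the object name "/demo_shm" and the message are made up for the example.

    /* Hypothetical sketch of shared memory between two processes.
       On Linux, compile with something like: cc share.c -lrt */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";                /* made-up object name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                           /* one page of shared data */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (fork() == 0) {                             /* child writes into the region */
            strcpy(p, "hello from the child");
            return 0;
        }
        wait(NULL);                                    /* parent sees the child's write */
        printf("parent read: %s\n", p);

        munmap(p, 4096);
        shm_unlink(name);                              /* remove the shared object */
        return 0;
    }

Error handling is omitted to keep the sketch short. Sharing program text works the same way in principle, and the OS does it automatically for read-only code pages.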

Logical Organization
- Programs are organized into modules: stack, text, uninitialized data, or logical modules such as libraries, objects, etc.
- Code modules may be compiled independently
- Different degrees of protection can be given to modules (read-only, execute-only)
- Modules can be shared

Unix Process Memory Layout
[Figure: from low address to high address: Text, Initialized Data, Uninitialized Data (BSS), Heap, ..., Stack, argv[ ], env[ ]. The text and initialized data are read from the program file by exec; the BSS is zeroed by exec.]
.bss (or bss) is used by many compilers and linkers as the name of the data segment containing static variables that are filled solely with zero-valued data initially (i.e., when execution begins). It is often referred to as the "bss section" or "bss segment". The program loader initializes the memory allocated for the bss section when it loads the program. The above logical address layout gets mapped (bound) to physical addresses during (re)loading. http://en.wikipedia.org/wiki/.bss
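
As a small illustration of this layout (not part of the original slides), the following C sketch prints the addresses of objects that typically land in the text, initialized data, BSS, heap, and stack regions. Exact placement and ordering are compiler- and OS-dependent, so treat the output as indicative only.

    #include <stdio.h>
    #include <stdlib.h>

    int init_data = 42;   /* initialized data segment, read from the program file */
    int bss_data;         /* uninitialized data (BSS), zeroed by exec */

    int main(void) {
        int stack_var = 0;                          /* lives on the stack */
        int *heap_var = malloc(sizeof *heap_var);   /* lives on the heap */

        printf("text  (main)     : %p\n", (void *)main);
        printf("data  (init_data): %p\n", (void *)&init_data);
        printf("bss   (bss_data) : %p\n", (void *)&bss_data);
        printf("heap  (heap_var) : %p\n", (void *)heap_var);
        printf("stack (stack_var): %p\n", (void *)&stack_var);

        free(heap_var);
        return 0;
    }

On a typical Linux build the text address is lowest and the stack address is highest, matching the low-to-high ordering in the figure.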

Physical Organization
- Memory is organized into two levels: main memory and secondary memory
- Main memory is relatively fast, expensive, and volatile
- Secondary memory is relatively slow, cheaper, larger in capacity, and non-volatile
- The memory available for a program plus its data may be insufficient
- Overlaying allows various modules to be assigned to the same region of memory

Hardware Support
- How to partition memory
- How to keep track of which partition belongs to which process
- How to prevent a process from accessing a partition that has not been allocated to it

Section 7.1 Assumptions
- Processes are never partially in memory
- Processes are stored contiguously

Memory Partitioning
Virtual memory approaches:
- Segmentation and/or paging
Non-virtual memory approaches:
- Partitioning: fixed and dynamic
- Simple paging
- Simple segmentation

Fixed Partitioning
Partition available memory into regions with fixed boundaries.
Equal-size partitions:
- Any process whose size <= the partition size can be loaded into an available partition
- If all partitions are full, the operating system can swap a process out of a partition
- If the program size > the partition size, the programmer must use overlays
Splitting a large program into pieces allows different pieces to be loaded into memory at different times so that each piece can run. Each such piece is called an overlay. Overlays can be swapped dynamically into and out of memory by the OS. Typically, overlay 0 runs first, followed by overlay 1, and so on.

Fixed Partitioning: Equal Size
- Main memory use is inefficient
- Internal fragmentation: part of a partition goes unused
[Figure: an 8M operating system region followed by equal 8M partitions; a 3M process 1 occupies one 8M partition, leaving the rest of that partition unused.]

Fixed Partitioning: Unequal Sizes
- Lessens the problem of internal fragmentation
[Figure: an 8M operating system region followed by partitions of 2M, 6M, 8M, 8M, and 12M; a 3M process 1 is loaded into one of the partitions.]

Fixed Partitioning: Placement Algorithm
Equal-size partitions:
- Because all partitions are of equal size, it does not matter which partition is used
Unequal-size partitions:
- Assign each process to the smallest partition within which it will fit
- One queue for each partition
- Processes are assigned in such a way as to minimize wasted memory within a partition (internal fragmentation)
Note that processes are swapped out of memory when no process in memory is ready and other processes need to be brought in from secondary memory. Placement algorithms address where in memory those swapped-in processes are placed. A sketch of the unequal-size rule follows.
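
A minimal sketch of the unequal-size placement rule described above (not from the slides): given a table of fixed partition sizes, pick the smallest partition that can hold the process. The partition sizes reuse the unequal-size figure (2M, 6M, 8M, 8M, 12M); everything else is an invented example.

    #include <stdio.h>

    /* Return the index of the smallest fixed partition >= size_kb, or -1 if none fits. */
    int smallest_fit(const int part_kb[], int nparts, int size_kb) {
        int best = -1;
        for (int i = 0; i < nparts; i++) {
            if (part_kb[i] >= size_kb &&
                (best == -1 || part_kb[i] < part_kb[best]))
                best = i;
        }
        return best;
    }

    int main(void) {
        int parts[] = {2048, 6144, 8192, 8192, 12288};  /* 2M, 6M, 8M, 8M, 12M in KB */
        int proc_kb = 3072;                             /* a 3M process */
        int i = smallest_fit(parts, 5, proc_kb);
        printf("3M process -> partition %d (%d KB)\n", i, parts[i]);
        return 0;
    }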

One Process Queue per Partition
[Figure: new processes are placed in a separate queue for each partition; the operating system occupies the first region of memory.]

One Process Queue for All Partitions
[Figure: new processes wait in a single queue shared by all partitions; the operating system occupies the first region of memory.]
The smallest available partition that will hold the process is selected.

Pros and Cons of Fixed Partitioning
Pros:
- Fixed partitioning is simple, and very little OS support is required
Cons:
- Since the number of partitions is fixed, it limits the degree of multiprogramming
- It is inefficient with small processes
If the sizes of processes are known beforehand, then reasonable partition sizes can be chosen.

Dynamic Partitioning
- Partitions are created dynamically
- Partitions are of variable size and number
- Partition sizes are determined by the requests
Disadvantages:
- External fragmentation: small holes in memory between allocated partitions
- Placement is more complicated; compaction must be used to shift processes so they are contiguous and all free memory is in one block

Example Dynamic Partitioning

Dynamic Partitioning: Placement Algorithms
The operating system must decide which free block to allocate to a process.
Best-fit algorithm:
- Chooses the block that is closest in size to the request
- Results in minimally sized fragments, requiring compaction
- Worst performer overall
Note that processes are swapped out of memory when no process in memory is ready and other processes need to be brought in from secondary memory. Placement algorithms address where in memory those swapped-in processes are placed.

Dynamic Partitioning: Placement Algorithms
First-fit algorithm:
- Starts scanning from the beginning of memory and chooses the first available block that is large enough
- May load many processes in the front end of memory
- Fastest
- Generally the best performer

Dynamic Partitioning: Placement Algorithms
Next-fit algorithm:
- Scans memory from the location of the last allocation and chooses the next available block that is large enough
- More often allocates a block at the end of memory, where the largest free block is found
- Compaction is then required to obtain a large block at the end of memory, so compaction is more frequent
Compaction is the process of relocating occupied partitions to one end of memory so that they are contiguous and all free memory is together in one block. The figure and code sketch below compare the three policies.

[Figure: memory configuration before and after allocating a 16K block, with free blocks of 8K, 12K, 22K, 18K, 6K, 14K, and 36K interleaved with allocated blocks; the last allocated block was 14K. First fit takes the 22K block, best fit takes the 18K block (leaving a 2K fragment), and next fit takes the 36K block after the last allocation (leaving 20K).]
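
To make the three placement policies concrete, here is a rough C sketch (not from the slides) that scans an array of free-hole sizes and returns the index chosen by first fit, best fit, or next fit. The hole sizes reuse the figure's numbers; representing memory as a simple array of hole sizes is an assumption made for brevity, since a real allocator would walk a linked free list.

    #include <stdio.h>

    enum policy { FIRST_FIT, BEST_FIT, NEXT_FIT };

    /* holes[]: free-block sizes in memory order.
       *last:   index of the hole used by the previous allocation (needed by next fit).
       Returns the index of the chosen hole, or -1 if the request cannot be satisfied. */
    int place(const int holes[], int n, int request, enum policy p, int *last) {
        int chosen = -1;
        if (p == FIRST_FIT) {
            for (int i = 0; i < n; i++)
                if (holes[i] >= request) { chosen = i; break; }
        } else if (p == BEST_FIT) {
            for (int i = 0; i < n; i++)
                if (holes[i] >= request &&
                    (chosen == -1 || holes[i] < holes[chosen]))
                    chosen = i;                 /* smallest hole that still fits */
        } else {                                /* NEXT_FIT */
            for (int k = 1; k <= n; k++) {      /* resume scanning after the last allocation */
                int i = (*last + k) % n;
                if (holes[i] >= request) { chosen = i; break; }
            }
        }
        if (chosen != -1) *last = chosen;
        return chosen;
    }

    int main(void) {
        int holes[] = {8, 12, 22, 18, 6, 14, 36};   /* free-hole sizes (KB) from the figure */
        int lf = 5, lb = 5, ln = 5;                 /* the last allocation was the 14K block */
        printf("first fit -> hole %d\n", place(holes, 7, 16, FIRST_FIT, &lf));
        printf("best  fit -> hole %d\n", place(holes, 7, 16, BEST_FIT,  &lb));
        printf("next  fit -> hole %d\n", place(holes, 7, 16, NEXT_FIT,  &ln));
        return 0;
    }

For a 16K request this picks hole 2 (22K) under first fit, hole 3 (18K) under best fit, and hole 6 (36K) under next fit, matching the figure.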

Buddy System
Fixed and dynamic partitioning both have drawbacks.
Fixed partitioning:
- Limits the number of active processes
- Uses space inefficiently if the sizes of the processes don't match the sizes of the partitions
Dynamic partitioning:
- More complex to maintain
- Includes the overhead of compaction

Buddy System
- The entire space available is treated as a single block of size 2^U
- If a request for a block of size s satisfies 2^(U-1) < s <= 2^U, the entire block is allocated
- Otherwise, the block is split into two equal buddies; splitting continues until the smallest block greater than or equal to s becomes available
- Requests are rounded up to a power of two
The buddy system is used in UNIX kernel memory management, which uses a lazy buddy variant: blocks are not coalesced immediately, in order to reduce the workload.
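
A rough sketch of the sizing rule (simplified; this is not the UNIX kernel's actual allocator): round the request up to the nearest power of two, then split the initial block in half repeatedly until a block of that size appears. The initial size of 256 and the request of 16 follow the example on the next slides.

    #include <stdio.h>

    /* Smallest power of two >= s: the block size the buddy system actually hands out. */
    unsigned round_up_pow2(unsigned s) {
        unsigned b = 1;
        while (b < s) b <<= 1;
        return b;
    }

    int main(void) {
        unsigned total = 256;           /* one initial block of size 2^U = 256 */
        unsigned request = 16;
        unsigned need = round_up_pow2(request);

        /* Split the large block in half until a block of size `need` is produced. */
        for (unsigned b = total; b > need; b /= 2)
            printf("split a %3u block into two %3u buddies\n", b, b / 2);
        printf("request %u -> allocate a block of size %u\n", request, need);
        return 0;
    }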

Buddy System
Advantages:
- Coalesces adjacent buffers
Disadvantages:
- Performance (recursive coalescing is expensive)
- Poor API

Buddy System Allocation
Begin with one large block (size 256). Suppose we want a block of size 16.
[Diagram: a single free block of size 256.]

Buddy System Allocation
Recursively subdivide: the 256 block is split into two 128 buddies, a 128 into two 64s, and a 64 into two 32s.
[Diagram: successive splits of the 256 block.]

Buddy System Allocation
Splitting a 32 block yields two blocks of size 16; one of those blocks can be given to the program.
[Diagram: the final split, with one 16 block allocated.]


Deallocation and Coalescing
- When a block becomes free, it tries to rejoin its buddy
- A bit in its buddy tells whether the buddy is free
- If so, they can be glued together to make a block twice as big
[Diagram: two free buddies merging into a block of twice the size.]
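
One detail the slides leave implicit is how a block locates its buddy. If the managed region starts at offset 0 and every block of size 2^k is aligned to a multiple of its size, the buddy of the block at offset x is at offset x XOR 2^k, which is what makes the free-bit check cheap. A tiny sketch (offsets chosen to match the 16-byte blocks from the allocation example):

    #include <stdio.h>

    /* Offset of the buddy of the block at `offset` with power-of-two `size`,
       assuming the managed region begins at offset 0. */
    unsigned buddy_of(unsigned offset, unsigned size) {
        return offset ^ size;
    }

    int main(void) {
        printf("buddy of the 16-byte block at offset  0 is at offset %u\n", buddy_of(0, 16));
        printf("buddy of the 16-byte block at offset 16 is at offset %u\n", buddy_of(16, 16));
        /* If both buddies are free, they coalesce into the 32-byte block at offset 0. */
        return 0;
    }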


Deallocation and Coalescing
Coalescing strategies:
- Prompt: coalesce at the point of deallocation (right away)
- Delayed: wait until it's necessary (at allocation time)
- Thorough: coalesce the entire heap
- Demand: coalesce only enough to satisfy an allocation request

Relocation
Absolute (physical) memory addresses:
- Are based on the partition into which the process is loaded
- Are determined when a process is first loaded, when a process is reloaded after being swapped out, or when compaction occurs
- Are therefore not knowable at compile or link time
Executable files use logical (relocatable) addresses; relative addresses are one example of logical addresses.
Logical addresses must be translated to physical addresses at run time.

Hardware Support for Relocation
[Figure: a relative address from the program is fed to an adder along with the base register to produce the absolute address; a comparator checks the absolute address against the bounds register and raises an interrupt to the operating system if it is out of range. The base and bounds values come from the process control block, and the process image (program, data, stack) resides in main memory.]

Registers Used During Execution
- Assigned when a process enters the running state
- Base register: starting address of the process
- Bounds register: ending location of the process (where do these values come from?)
Translation from relative addresses to physical addresses:
- The base register is added to each relative address to produce the absolute address
- If the absolute address exceeds the bounds register, an interrupt is generated to the operating system
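
The following C sketch (register values invented) simulates the base/bounds mechanism just described: each relative address is added to the base register, and any absolute address beyond the bounds register stands in for an interrupt to the operating system.

    #include <stdio.h>

    /* Invented values: the process image is loaded at 40000 and ends at 50000. */
    static unsigned base_reg   = 40000;   /* starting physical address of the process */
    static unsigned bounds_reg = 50000;   /* ending physical address of the process */

    /* Translate a relative (logical) address; -1 stands in for the interrupt. */
    long translate(unsigned relative) {
        unsigned absolute = base_reg + relative;   /* adder */
        if (absolute > bounds_reg)                 /* comparator */
            return -1;                             /* would trap to the operating system */
        return absolute;
    }

    int main(void) {
        printf("relative   100 -> %ld\n", translate(100));      /* valid */
        printf("relative 20000 -> %ld\n", translate(20000));    /* past the bounds register */
        return 0;
    }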

Assumptions – Sections 7.3 and 7.4
- Processes are (still) never partially in memory
- Processes are not necessarily stored contiguously

Simple Paging
- Partition memory into equal-sized pieces called frames
- Partition each process into frame-sized pieces called pages
- Pages are loaded into frames when the process is loaded, not necessarily contiguously!
- No external fragmentation; internal fragmentation is limited to the last page of each process
- The operating system maintains a page table for each process, with an entry for each page of the process specifying the frame into which that page is loaded (when and how is the page table created?)
- Physical address = frame number * frame size + offset
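
A minimal sketch of the formula in the last bullet (frame size and page-table contents are invented): the logical address is split into a page number and an offset, the page number is looked up in the process's page table, and the physical address is frame number * frame size + offset.

    #include <stdio.h>

    #define FRAME_SIZE 1024u                 /* invented page/frame size in bytes */

    /* Invented page table for one process: page i is loaded into frame page_table[i]. */
    static unsigned page_table[] = {5, 2, 7, 1};

    unsigned translate(unsigned logical) {
        unsigned page   = logical / FRAME_SIZE;
        unsigned offset = logical % FRAME_SIZE;
        return page_table[page] * FRAME_SIZE + offset;   /* frame * frame size + offset */
    }

    int main(void) {
        unsigned logical = 2 * FRAME_SIZE + 100;         /* byte 100 of page 2 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }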

Simple Paging Example
[Figure: main memory divided into frames 0-7. Load A (pages A.0, A.1), load B (pages B.0, B.1), load C (page C.0), swap out B, then load D (pages D.0-D.3). D's pages occupy the frames freed by B plus other free frames, so they are not contiguous; frame 7 remains on the free frame list, and the operating system maintains a page table for process D.]

Address Translation in Paging
To make translation efficient, the page (and frame) size is a power of 2.
Given a (page #, offset), find (frame #, offset):
- 2^n >= number of pages => n bits hold the page number
- 2^k >= number of frames => k bits hold the frame number
- 2^m = number of bytes in a page => m bits determine the offset within a page
The logical address is the concatenation of the n page-number bits and the m offset bits.
The physical address is the concatenation of the k frame-number bits and the same m offset bits, where the frame is the one corresponding to page n (obtained from the page table).
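
Because the page size is a power of two, the division and modulus in the previous sketch reduce to a shift and a mask, which is effectively what the hardware does: keep the low m offset bits unchanged and replace the page-number bits with the frame number. A short sketch assuming a 10-bit offset (1024-byte pages) and the same invented page table:

    #include <stdio.h>

    #define OFFSET_BITS 10u                              /* assumed: 2^10-byte pages */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

    static unsigned page_table[] = {5, 2, 7, 1};         /* invented page -> frame mapping */

    unsigned translate(unsigned logical) {
        unsigned page   = logical >> OFFSET_BITS;        /* high bits: page number */
        unsigned offset = logical &  OFFSET_MASK;        /* low bits: offset, unchanged */
        return (page_table[page] << OFFSET_BITS) | offset;   /* concatenate frame | offset */
    }

    int main(void) {
        unsigned logical = (2u << OFFSET_BITS) | 100u;   /* page 2, offset 100 */
        printf("logical 0x%x -> physical 0x%x\n", logical, translate(logical));
        return 0;
    }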

Address Translation in Paging
[Figure: a 16-bit logical address is split into an n-bit page number and an m-bit offset. The page number indexes the process page table to obtain the k-bit frame number, and the physical address is that frame number concatenated with the unchanged offset.]

Address Translation Example
Logical address space of one process (pages of size 4):
- Page 0: logical addresses 0-3 hold A B C D
- Page 1: logical addresses 4-7 hold E F G H
- Page 2: logical addresses 8-11 hold I J K L
- Page 3: logical addresses 12-15 hold M N O P
Main memory (frames of size 4):
- Frame 0: physical addresses 0-3 hold E F G H
- Frame 1: physical addresses 4-7 are empty (the slide marks address 4 with a "?")
- Frame 2: physical addresses 8-11 hold A B C D
- Frame 3: physical addresses 12-15 are empty
- Frame 4: physical addresses 16-19 hold M N O P
- Frame 5: physical addresses 20-23 hold I J K L
Page table: page 0 -> frame 2, page 1 -> frame 0, page 2 -> frame 5, page 3 -> frame 4
Question: logical address 6 contains a value that we need; what is its actual address in main memory?
- Page size = 4, so 2 bits of offset; number of pages = 4, so 2 bits of page number: a 4-bit logical address.
- Frame size = page size = 4, so 2 bits of offset; number of frames = 6, so 3 bits of frame number: a 5-bit physical address.
- Logical address 6 = 0110 = page 01, offset 10.
- Replace the page number 01 with its frame number 000 (from the page table).
- 000 10 is the physical address, i.e., physical address 2, which holds the value G.

Segmentation
- Partition each process into logical parts called segments; for example, a text segment, data segment, and stack segment
- Segments are loaded into memory when the process is loaded, not necessarily contiguously
- No internal fragmentation, but external fragmentation is a concern
- The OS maintains a segment table for each process, with an entry for each segment specifying its starting and ending addresses (when and how is the segment table created?)
- Physical address = segment starting address + offset

Address Translation in Segmentation
Logical address = segment # followed by offset
Base address = starting address of the segment
Length = maximum length of the segment
Compare the offset with the length of the segment. If offset >= length of segment, the address is invalid. Otherwise, add the offset to the base address to obtain the physical address.
Example (16-bit logical address: 4-bit segment number, 12-bit offset):
- Logical address 0001 001011110000 refers to segment 1, offset 001011110000.
- Segment table: segment 0 has length 001011101110 and base 0000010000000000; segment 1 has length 011110011110 and base 0010000000100000.
- The offset 001011110000 is less than segment 1's length, so the address is valid; adding it to the base 0010000000100000 gives the physical address 0010001100010000.
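
To tie the segmentation rule together, here is a minimal C sketch of segment-table translation with the length check described above. The segment entries and the offset 0x2F0 come from the reconstructed example (segment 1: length 0x79E, base 0x2020); everything else is an assumption made for the sketch.

    #include <stdio.h>

    struct segment { unsigned length; unsigned base; };

    /* Segment table from the example above (values in hex). */
    static struct segment seg_table[] = {
        { 0x2EE, 0x0400 },   /* segment 0: length 001011101110, base 0000010000000000 */
        { 0x79E, 0x2020 },   /* segment 1: length 011110011110, base 0010000000100000 */
    };

    /* Return the physical address, or -1 if the offset is outside the segment. */
    long translate(unsigned segment, unsigned offset) {
        if (offset >= seg_table[segment].length)
            return -1;                                /* invalid address: would trap to the OS */
        return seg_table[segment].base + offset;      /* base + offset */
    }

    int main(void) {
        /* Logical address: segment 1, offset 0x2F0 (binary 001011110000). */
        printf("segment 1, offset 0x2F0 -> physical 0x%lx\n", translate(1, 0x2F0));
        return 0;
    }

The printed physical address, 0x2310, is the binary 0010001100010000 obtained in the example.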