
Memory Management CSE451 Andrew Whitaker

Big Picture

Up till now, we've focused on how multiple programs share the CPU. Now: how do multiple programs share memory?

[Slide figure: Firefox, Word, and javac competing for memory]

Goals of Memory Management

Allocate scarce memory resources among competing processes
- While maximizing memory utilization and system throughput

Provide a convenient abstraction for programming (and for compilers, etc.)
- Hide sharing
- Hide scarcity

Provide isolation between processes

Problem: Memory Relocation

A program "sees" different memory addresses, depending on who else is running.

[Slide figure: Firefox, Word, javac, and PacMan loaded at different physical addresses between 0 MB and 3 MB, depending on load order]

Virtual Addressing

Add a layer of indirection between user addresses and hardware addresses. The programmer sees a virtual address space, which always starts at zero.

[Slide figure: Firefox, Word, and PacMan each see a 0 MB-1 MB virtual address space, mapped onto distinct ranges of physical addresses]

Memory Management: Factors to Consider

Workload characteristics
- Batch systems, Internet systems, interactive systems

Level of hardware support
- No HW support
- Base & limit registers
- Paging, segmentation, TLBs

Strategy #1: Swapping

Only one program occupies memory at once. On a context switch:
- Save old program's memory to disk
- Load next program's memory from disk (either eagerly or lazily)

Advantage: very simple
Disadvantage: disk is exceedingly slow!

Why Use Swapping?

No hardware support
- MIT's CTSS operating system (1961) was built this way; memory was so small that only one job would fit!

Long-lived batch jobs
- Job-switching costs become insignificant as the quantum grows large

How Slow is Swapping?
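A back-of-envelope sketch of the cost, with made-up but plausible numbers (the process size and disk bandwidth below are assumptions, not figures from the lecture):

```python
# Hypothetical numbers: a 100 MB process image and a disk sustaining
# 100 MB/s of sequential bandwidth.
MEMORY_MB = 100          # process image size (assumed)
DISK_MB_PER_S = 100      # disk bandwidth (assumed)

# A swap-based context switch must write the old image out to disk
# and read the next image in, so it pays the transfer cost twice.
swap_seconds = 2 * MEMORY_MB / DISK_MB_PER_S
print(f"one swap-based context switch: ~{swap_seconds:.1f} s")
```

Seconds per context switch, versus the microseconds a CPU-only switch costs: this is why swapping alone only makes sense when switches are rare (long batch quanta) or unavoidable (no hardware support).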

Strategy #2: Fixed Partitions

Physical memory is broken up into fixed partitions:
- All partitions are the same size
- Partitions never change

Advantage: simple

Disadvantages of Fixed Partitions

Internal fragmentation: memory in a partition not used by its owning process isn't available to other processes.

Partition size problem: no one size is appropriate for all processes
- Tradeoff between fragmentation and accommodating large programs

Implementing Fixed Partitions

Requires hardware-supported base and limit registers.

[Slide figure: six 2K partitions spanning physical addresses 0-10K; the virtual address's offset is compared against the limit register (2K) - if it exceeds the limit, raise a protection fault; otherwise it is added to the base register (P3's base: 6K) to form the physical address]
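The hardware check above can be sketched in a few lines (a minimal sketch; the partition size and base come from the slide's figure, the address used in the example is made up):

```python
# Base-and-limit translation for fixed partitions.
PARTITION_SIZE = 2 * 1024   # 2K partitions, as in the figure

def translate(virtual_addr, base, limit=PARTITION_SIZE):
    """Map a virtual address to a physical one, or fault."""
    if virtual_addr >= limit:      # the hardware limit check
        raise MemoryError("protection fault")
    return base + virtual_addr     # the hardware base addition

# P3 occupies the partition starting at 6K:
print(hex(translate(0x100, base=6 * 1024)))   # -> 0x1900
```

Note that the limit check happens before the base is added, so a process can never name memory outside its own partition.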

Strategy #3: Variable Partitions

Obvious next step: physical memory is broken up into variable-sized partitions.

Advantages
- No internal fragmentation: simply allocate a partition just big enough for the process

Problems
- We must know the program's size in advance
- External fragmentation: as we load and unload jobs, holes are left scattered throughout physical memory

Mechanics of Variable Partitions

[Slide figure: variable-sized partitions in physical memory; the base register holds P3's base and the limit register holds P3's size; an offset exceeding the limit raises a protection fault, otherwise it is added to the base to form the physical address]

Dealing with External Fragmentation

- Swap a program out
- Re-load it, adjacent to another
- Adjust its base register
- "Lather, rinse, repeat"

Ugh.

[Slide figure: before and after - the same partitions, compacted so the free holes merge into one contiguous region]

Strategy #4: Paging

Use fixed-size units of memory (called pages) for both virtual and physical memory.

[Slide figure: a virtual address space of pages 0 through X mapped onto a physical address space of frames 0 through Y]

Paging Advantages

Paging reduces internal fragmentation
- How?

Paging eliminates external fragmentation
- How?

Disadvantages?

Address Translation

A virtual address has two parts: a virtual page number and an offset. Address translation applies only to the virtual page number.
- Why?

The virtual page number (VPN) is an index into a page table:
- The page table entry contains a page frame number (PFN)
- The physical address is PFN::offset

[Slide figure: a virtual address split into virtual page # and offset]

Page Tables

- Managed by the OS
- Map a virtual page number (VPN) to a page frame number (PFN)
- The VPN is simply an index into the page table
- One page table entry (PTE) per page in the virtual address space (i.e., one PTE per VPN)

Mechanics of Address Translation

[Slide figure: the virtual page # of a virtual address indexes the page table to obtain a page frame #; the page frame # plus the unchanged offset form the physical address into physical memory (frames 0 through Y)]

Example of Address Translation

Assume 32-bit addresses:
- Assume page size is 4KB (4096 bytes, or 2^12 bytes)
- VPN is 20 bits long (2^20 VPNs), offset is 12 bits long

Let's translate virtual address 0x13325328:
- VPN is 0x13325, and offset is 0x328
- Assume page table entry 0x13325 contains the value 0x03004
  - The page frame number is 0x03004
  - So VPN 0x13325 maps to PFN 0x03004
- Physical address = PFN::offset = 0x03004328
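The slide's worked example can be checked mechanically. A minimal sketch, using the one page-table entry the example assumes:

```python
# 32-bit addresses, 4 KB pages: low 12 bits are the offset,
# high 20 bits are the virtual page number (VPN).
PAGE_SHIFT = 12
OFFSET_MASK = (1 << PAGE_SHIFT) - 1   # 0xFFF

# Sparse "page table" with only the entry from the example: VPN -> PFN.
page_table = {0x13325: 0x03004}

def translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT            # index into the page table
    offset = vaddr & OFFSET_MASK         # passed through unchanged
    pfn = page_table[vpn]                # page table lookup
    return (pfn << PAGE_SHIFT) | offset  # physical address = PFN::offset

print(hex(translate(0x13325328)))   # -> 0x3004328
```

Only the VPN is translated; the offset survives untouched, which is exactly why translation applies to the page number alone.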

Page Table Entries (PTEs)

PTEs control address mappings:
- The page frame number indicates the physical page
- The valid bit says whether or not the PTE can be used, i.e., whether a virtual address is valid
- The referenced bit says whether the page has been accessed recently
- The modified bit says whether or not the page is dirty; it is set when a write to the page has occurred
- The protection bits control which operations are allowed: read, write, execute

[Slide figure: PTE layout - page frame number | prot | M | R | V]
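A sketch of packing these fields into a single integer. The bit positions and field widths below are illustrative assumptions; real layouts (e.g., x86's) differ:

```python
# Assumed layout: V, R, M in the low bits, 3 protection bits, then the PFN.
V_BIT, R_BIT, M_BIT = 0, 1, 2      # valid, referenced, modified
PROT_SHIFT, PFN_SHIFT = 3, 6

PROT_READ, PROT_WRITE, PROT_EXEC = 1, 2, 4

def make_pte(pfn, prot, valid=True, referenced=False, modified=False):
    pte = pfn << PFN_SHIFT             # physical frame number
    pte |= prot << PROT_SHIFT          # allowed operations
    pte |= int(valid) << V_BIT
    pte |= int(referenced) << R_BIT
    pte |= int(modified) << M_BIT
    return pte

pte = make_pte(0x03004, PROT_READ | PROT_WRITE)
assert pte >> PFN_SHIFT == 0x03004     # recover the frame number
assert pte & (1 << V_BIT)              # mapping is valid
assert not pte & (1 << M_BIT)          # page not yet dirty
```

Keeping the whole entry in one word is what lets the hardware read a mapping and its permissions in a single memory access.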

Paging Issues (Stay Tuned...)

How to make it fast?
- Accessing the page table on each memory reference is not workable

How to deal with memory scarcity?
- Virtual memory

How do we control the memory overhead of page tables?
- Need one PTE per page in the virtual address space
  - A 32-bit address space with 4KB pages = 2^20 PTEs = 1,048,576 PTEs
  - At 4 bytes/PTE, that's 4MB per page table
- OSes typically have a separate page table per process
  - 25 processes = 100MB of page tables
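The overhead arithmetic from the slide, spelled out:

```python
# One PTE per virtual page, for a 32-bit address space with 4 KB pages.
ADDR_BITS = 32
PAGE_SIZE = 4 * 1024                    # 4 KB -> 12 offset bits
PTE_BYTES = 4

num_ptes = 2 ** ADDR_BITS // PAGE_SIZE  # one PTE per virtual page
table_bytes = num_ptes * PTE_BYTES

print(num_ptes)                         # 1048576  (= 2^20)
print(table_bytes // 2 ** 20, "MB")     # 4 MB per page table
print(25 * table_bytes // 2 ** 20, "MB")  # 100 MB for 25 processes
```

This is the overhead of a flat table; multi-level and inverted page tables (later in the course) exist precisely to avoid paying it for sparsely used address spaces.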

Strategy #5: Segmentation

Instead of a flat address space, programmers see a collection of segments (e.g., main(), heap, stack, shared library).

Virtual address = (segment #, offset)

Why Segments?

Facilitates sharing and re-use
- Unlike a page, a segment is a logical entity

Segments can have arbitrary size

Hardware Support

Segmentation is a generalization of variable-sized partitions:
- Variable-sized partitions: 1 segment per process
- Segmentation: N segments per process

So, we need multiple base/limit pairs. These are stored in a segment table:
- Segments are named by segment #, used as an index into the table
- A virtual address is (segment #, offset)
- The offset of the virtual address is added to the base address of the segment to yield the physical address

Segment Lookups

[Slide figure: the segment # of a virtual address indexes a segment table of (base, limit) pairs; if the offset exceeds the segment's limit, raise a protection fault; otherwise add the offset to the base to form the physical address into physical memory]
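The lookup is the base/limit check from before, repeated once per segment. A minimal sketch, with made-up segment numbers, bases, and sizes:

```python
# Segment table: one (base, limit) pair per segment. The entries here
# are illustrative assumptions, not values from the lecture.
segment_table = [
    (0x0000, 0x400),   # segment 0, e.g. code: base 0x0000, 1 KB long
    (0x8000, 0x100),   # segment 1, e.g. heap: base 0x8000, 256 B long
]

def translate(seg, offset):
    base, limit = segment_table[seg]   # indexed by segment #
    if offset >= limit:                # per-segment limit check
        raise MemoryError("protection fault")
    return base + offset               # per-segment base addition

print(hex(translate(1, 0x20)))   # -> 0x8020
```

Because each segment carries its own limit, a stray offset faults within its segment instead of silently reaching a neighbor.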

Pros and Cons

Yes, it's "logical" and it facilitates sharing and reuse. But it has all the horror of a variable partition system:
- e.g., external fragmentation

Intel x86: Combining Segmentation and Paging

Use segments to manage logical units. Use pages to partition segments into fixed-size chunks:
- Each segment has its own page table (a page table per segment, rather than per user address space)
- Memory allocation becomes easy once again: no contiguous allocation, no external fragmentation

[Slide figure: a virtual address split into segment #, page #, and offset within page; the page # and offset together form the offset within the segment]
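The two-step translation can be sketched as follows (field widths and table contents here are made up for illustration; they are not the x86's actual formats):

```python
# Each segment owns a page table mapping page # -> frame #.
PAGE_SHIFT = 12
OFFSET_MASK = (1 << PAGE_SHIFT) - 1

segment_page_tables = {
    0: {0: 0x40, 1: 0x41},   # segment 0's page table (assumed contents)
    1: {0: 0x90},            # segment 1's page table (assumed contents)
}

def translate(seg, seg_offset):
    page = seg_offset >> PAGE_SHIFT         # offset-within-segment splits
    offset = seg_offset & OFFSET_MASK       # into page # and page offset
    frame = segment_page_tables[seg][page]  # per-segment page table lookup
    return (frame << PAGE_SHIFT) | offset

print(hex(translate(0, 0x1234)))   # page 1 of segment 0 -> frame 0x41
```

Because segments are now backed by pages, a segment no longer needs contiguous physical memory, which is how this scheme sidesteps external fragmentation.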

Who Uses This?

Nobody :-)
- Only supported by x86, so portability is a problem

Linux primarily uses:
- 1 kernel code segment, 1 kernel data segment
- 1 user code segment, 1 user data segment