Introduction to Systems Programming
Lecture 6: Memory Management
Avishai Wool
Memory Management
Ideally, programmers want memory that is:
–large
–fast
–non-volatile (not erased when power goes off)
The memory hierarchy:
–a small amount of fast, expensive memory – cache
–some medium-speed, medium-priced main memory
–gigabytes of slow, cheap disk storage
The memory manager handles the memory hierarchy.
The Memory Hierarchy

Level                   Access Time   Capacity
Registers               1 nsec        < 1 KB
On-chip cache           2 nsec        4 MB
Main memory             10 nsec       512 MB - 2 GB
Magnetic (hard) disk    10 msec       200 GB - 1000 GB
Magnetic tape           100 sec       multi-TB

Other types of memory: ROM, EEPROM, Flash RAM.
Basic Memory Management
An operating system with one user process.
[Figure: memory layouts for an OS with a single user process – e.g., Palm computers, MS-DOS; BIOS in ROM.]
Why is Multiprogramming Good?
Running several processes in parallel seems to let users get more done.
Can we build a model that quantifies this?
From the system's perspective: multiprogramming improves CPU utilization.
Modeling Multiprogramming
A process waits for I/O a fraction p of the time.
–The remaining (1-p) of the time is spent in CPU bursts.
Degree of multiprogramming: the number n of processes in memory.
Utilization = Pr(CPU is busy running some process).
Assuming the processes' I/O waits are independent, the CPU is idle only when all n processes wait at once:
Utilization = 1 - p^n
For an interactive process, p = 80% is realistic.
CPU Utilization vs. Degree of Multiprogramming
[Figure: CPU utilization as a function of the degree of multiprogramming.]
Using the Simple Model
Assume 32 MB of memory: the OS uses 16 MB, and each user process uses 4 MB.
4-way multiprogramming is possible.
The model predicts utilization = 1 - 0.8^4 ≈ 60%.
If we add another 16 MB, 8-way multiprogramming becomes possible:
utilization = 1 - 0.8^8 ≈ 83%.
Real-Memory Partitioning
Multiprogramming with Fixed Partitions
[Figure:]
(a) Separate input queues for each partition (used in IBM OS/360)
(b) A single input queue
Problems with Fixed Partitions
Separate queues: memory is not used efficiently if many processes queue for one partition class and few for another.
Single queue: a small process can take up a big partition; again, memory is not used efficiently.
Basic Issues in Multiprogramming
The programmer and compiler cannot be sure where a process will be loaded in memory:
–address locations of variables and code routines cannot be absolute.
Relocation: the mechanism for fixing up memory references in the code.
Protection: one process should not be able to access another process's memory partition.
Relocation in Software: Compiler + OS
The compiler assumes the program is loaded at address 0.
The compiler/linker inserts a relocation table into the binary file:
–the positions in the code that contain memory addresses.
At load time (part of process creation):
–the OS computes offset = lowest memory address of the process
–the OS modifies the code, adding the offset to all positions listed in the relocation table.
Relocation Example
Relocation table: 6, 12, …
[Figure: at compile time the code occupies addresses 0-16 and contains "mov bx, *100" and "mov ax, *200" (each instruction: mov reg + a 4-byte address). At CreateProcess the code is loaded at addresses 1024-1040 and relocated by adding 1024, so the operands become *1124 and *1224.]
Protection – Hardware Support
Memory partitions have an ID (protection code).
The PSW has a "protection code" field (e.g., 4 bits), saved in the PCB as part of the process state.
The CPU checks each memory access: if the protection code of the address != the protection code of the process, it raises an error.
Alternative Hardware Support: Base and Limit Registers
Special CPU registers: "base" and "limit".
Each address location is added to the base value to map it to a physical address:
–replaces software relocation
–the OS sets the base & limit registers during CreateProcess.
Access to an address location over the limit value is a CPU exception error:
–this solves protection too.
The Intel 8088 used a weak version of this: a base register but no limit.
Swapping
Swapping
Fixed partitions are too inflexible and waste memory.
The next step up in complexity: dynamic partitions.
Allocate as much memory as each process needs.
Swap processes out to disk to allow more multiprogramming.
Swapping – Example
Memory allocation changes as processes come into memory and leave it.
[Figure: shaded regions are unused memory.]
How Much Memory to Allocate?
[Figure:]
(a) Allocating space for a growing data segment
(b) Allocating space for a growing stack & data segment
Issues in Swapping
When a process terminates – compact memory?
–Move all processes above the hole down in memory.
–Can be very slow: with 256 MB of memory and 4 bytes copied per 40 ns, compacting memory takes about 2.7 sec.
–Almost never used.
Result: the OS needs to keep track of holes.
Problem to avoid: memory fragmentation.
Swapping Data Structure: Bit Maps
[Figure: part of memory with 5 processes and 3 holes; tick marks show allocation units, shaded regions are free; the corresponding bit map is shown alongside.]
Properties of Bit-Map Swapping
With memory of M bytes and an allocation unit of k bytes, the bitmap uses M/k bits = M/8k bytes. This can be quite large:
–e.g., with a 4-byte allocation unit, the bitmap uses 1/32 of memory.
Searching the bitmap for a hole is slow.
Swapping Data Structure: Linked Lists
Variant #1: keep a list of blocks (process = P, hole = H).
What Happens When a Process Terminates?
Merge neighboring holes to create a bigger hole.
Variant #2
Keep separate lists for processes and for holes:
–e.g., process information can be kept in the PCB
–maintain the hole list inside the holes themselves.
[Figure: Process A followed by Hole 1 and Hole 2; each hole stores its size and prev/next pointers to the neighboring holes in the list.]
Hole Selection Strategy
Suppose we have a list of holes of sizes 10, 20, 10, 50, 5, and a process needs size 4. Which hole should we use?
First fit: pick the 1st hole that is big enough (here, the first hole of size 10).
–Break the hole into a used piece of size 4 and a remaining hole of size 10 - 4 = 6.
–Simple and fast.
Best Fit
For a process of size s, use the smallest hole with size(hole) >= s.
In the example, use the last hole, of size 5.
Problems:
–Slower (needs to search the whole list).
–Creates many tiny holes that fragment memory.
Best fit can be made as fast as first fit if blocks are sorted by size (but then termination processing is slower).
Other Options
Worst fit: use the biggest hole that fits.
–Simulations show that this is not very good.
Quick fit: maintain separate lists for common block sizes.
–Improves performance of the "find-hole" operation.
–More complicated termination processing.
Related Problems
The hole-list system is used in other places:
–the C language dynamic memory runtime: malloc()/calloc() and free(), or the C++ "new" keyword
–file systems can use this type of system to track free and used blocks on disk.
Virtual Memory
Main Idea
Processes use a virtual address space (e.g., 00000000-FFFFFFFF for 32-bit addresses).
Every process has its own address space.
The address space of each process can be larger than physical memory.
Memory Mapping
Only part of the virtual address space is mapped to physical memory at any time.
Part of each process's memory contents is on disk.
The hardware & OS collaborate to move memory contents to and from disk.
Advantages of Virtual Memory
No need for software relocation: process code uses virtual addresses.
Solves the protection requirement: it is impossible for a process to refer to another process's memory.
For virtual memory protection to work:
–the memory mapping (page table) must be per-process
–only the OS can modify the mapping.
Hardware Support: the MMU (Memory Management Unit)
Example
16-bit memory addresses.
Virtual address space size: 64 KB.
Physical memory: 32 KB (15 bits).
The virtual address space is split into 4 KB pages: 16 pages.
Physical memory is split into 4 KB page frames: 8 frames.
Paging
The relation between virtual addresses and physical memory addresses is given by the page table:
–the OS maintains the table (one page table per process)
–the MMU uses the table.
Example (cont.)
The CPU executes the command mov rx, *5.
The MMU gets the address "5".
Virtual address 5 is in page 0 (addresses 0-4095).
Page 0 is mapped to frame 2 (physical addresses 8192-12287).
The MMU puts the address 8197 (= 8192 + 5) on the bus.
Page Faults
What if the CPU issues mov rx, *32780?
That address is in page 8, which is un-mapped (not in any frame).
The MMU causes a page fault (an interrupt to the CPU).
The OS handles the page fault:
–evict some page from a frame
–copy the requested page from disk into the frame
–re-execute the instruction.
How the MMU Works
Splits the 32-bit virtual address into:
–a k-bit page number: the top k bits (MSB)
–a (32-k)-bit offset.
Uses the page number as an index into the page table, and adds the offset to the frame's base address.
The page table has 2^k entries. Each page is of size 2^(32-k) bytes.
[Figure: a 4-bit page number indexing the page table.]
Issues with Virtual Memory
The page table can be very large:
–32-bit addresses with 4 KB pages (12-bit offsets) give over 1 million pages
–each process needs its own page table.
Page lookup has to be very fast:
–if an instruction completes in 4 ns, the page-table lookup should take around 1 ns.
The page fault rate has to be very low.
Concepts for Review
Degree of multiprogramming
Processor utilization
Fixed partitions
Code relocation
Memory protection
Dynamic partitions – swapping
Memory fragmentation
Data structures: bitmaps; list of holes
First-fit / worst-fit / best-fit
Virtual memory
Address space
MMU
Pages and frames
Page table
Page fault
Page lookup