Memory Management & Paging
Copyright ©: Nahrstedt, Angrave, Abdelzaher, Caccamo
More on Dynamic Partitions: Bitmaps versus Linked Lists
Figure: part of memory with 5 processes and 3 holes; tick marks show allocation units; shaded regions are free. The corresponding bitmap and linked list are shown alongside.
More on Dynamic Partitions: Bitmaps versus Linked Lists
Figure: the four neighbor combinations for a terminating process X (neither, left, right, or both neighbors free), which determine how X's region is merged back into the free list.
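As an illustration of these four cases, here is a minimal sketch of how a linked-list allocator might merge a terminating process's region with its free neighbors. The `struct segment` layout and field names are assumptions for illustration, not the course's actual data structure.

```c
#include <stddef.h>
#include <stdbool.h>

/* Illustrative segment descriptor: one node per contiguous region,
 * kept in address order in a doubly linked list. */
struct segment {
    size_t start, length;
    bool free;                       /* true = hole, false = allocated */
    struct segment *prev, *next;
};

/* When process X terminates, mark its segment free and merge it with
 * any free neighbor, covering all four neighbor combinations. */
void release(struct segment *x) {
    x->free = true;

    /* Right neighbor is a hole -> absorb it into x. */
    if (x->next && x->next->free) {
        struct segment *r = x->next;
        x->length += r->length;
        x->next = r->next;
        if (r->next) r->next->prev = x;
        /* a real allocator would free(r) here */
    }
    /* Left neighbor is a hole -> let it absorb x. */
    if (x->prev && x->prev->free) {
        struct segment *l = x->prev;
        l->length += x->length;
        l->next = x->next;
        if (x->next) x->next->prev = l;
        /* a real allocator would free(x) here */
    }
    /* The remaining cases fall out of the two checks above: if both
     * neighbors are free, both merges happen; if neither is free,
     * the segment simply becomes a new hole. */
}
```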
More on Dynamic Partitions: Bitmaps versus Linked Lists
Which one occupies more space? It depends on the allocation scenario, but in most cases the bitmap needs more space.
Which one is faster to update when memory is allocated or freed? On average the bitmap, because it only needs to flip the corresponding bits.
Which one is faster for finding a free hole? On average the linked list, because all free holes can be linked together and searched directly.
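A minimal sketch of the bitmap approach, assuming one bit per allocation unit; it shows why allocation and free are cheap bit flips while finding a hole requires scanning for a run of zero bits. All sizes and names here are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define UNITS 1024                       /* allocation units tracked  */
static uint8_t bitmap[UNITS / 8];        /* 1 bit per unit: 1 = in use */

static int get_bit(size_t i) { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(size_t i, int v) {
    if (v) bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
    else   bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
}

/* Allocating or freeing a region is just flipping the matching bits. */
void mark_region(size_t first, size_t count, int in_use) {
    for (size_t i = first; i < first + count; i++) set_bit(i, in_use);
}

/* Finding a hole of 'count' units means scanning for a run of 0 bits,
 * which is why hole search is slower than walking a list of holes. */
long find_hole(size_t count) {
    size_t run = 0;
    for (size_t i = 0; i < UNITS; i++) {
        run = get_bit(i) ? 0 : run + 1;
        if (run == count) return (long)(i - count + 1);
    }
    return -1;                           /* no hole large enough */
}
```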
Storage Placement Algorithms
Best fit: use the smallest free space that is equal to or larger than the need. Rationale?
First fit: use the first free space found that is large enough to meet the need. Rationale?
Worst fit: use the largest available free space. Rationale?
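A minimal sketch of the three policies over a hypothetical linked list of free holes; the `struct hole` type and the single `choose` routine are illustrative, not taken from the slides.

```c
#include <stddef.h>

/* Hypothetical node in a singly linked list of free holes. */
struct hole {
    size_t start, length;
    struct hole *next;
};

enum policy { FIRST_FIT, BEST_FIT, WORST_FIT };

/* Return the hole selected by the given placement policy for a request
 * of 'need' bytes, or NULL if no hole is large enough. */
struct hole *choose(struct hole *free_list, size_t need, enum policy p) {
    struct hole *chosen = NULL;
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->length < need) continue;          /* too small */
        if (p == FIRST_FIT) return h;            /* first adequate hole */
        if (chosen == NULL ||
            (p == BEST_FIT  && h->length < chosen->length) ||
            (p == WORST_FIT && h->length > chosen->length))
            chosen = h;
    }
    return chosen;
}
```

Note that first fit can stop at the first adequate hole, while best fit and worst fit must examine the entire list unless it is kept sorted by size.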
Storage Placement Algorithms
Each strategy has problems:
Best fit: creates small free spaces that can't be used.
Worst fit: removes the large free spaces, so large programs may not fit later.
First fit: creates average-size free spaces.
Storage Placement Algorithms
These approaches can perform better or worse depending on the exact sequence of process arrivals and the sizes of those processes.
First fit is very simple and usually performs very well.
Surprisingly, best fit is usually the worst performer: it guarantees that the smallest amount of memory is wasted on each allocation, but memory is quickly fragmented into pieces too small to satisfy further requests.
How Bad Is Fragmentation?
Statistical argument: processes have random sizes.
Assume first fit is used and N blocks are allocated.
Then 0.5 N additional blocks will be wasted because of external fragmentation.
This is known as the 50% RULE.
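One consequence worth spelling out (standard arithmetic, not stated on the slide): if N blocks are in use and another 0.5 N blocks are lost to fragmentation, the wasted fraction of memory is

```latex
\frac{0.5N}{N + 0.5N} \;=\; \frac{0.5N}{1.5N} \;=\; \frac{1}{3}
```

i.e., roughly one-third of memory may be unusable.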
Dynamic Partitions: Solving Fragmentation with Compaction
Figure (five snapshots): memory holds the Monitor, Job 3, and Jobs 5-8 with free space scattered between them; compaction slides the jobs together so the free space coalesces into a single hole.
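A minimal sketch of what compaction does, assuming memory is modeled as a byte array and a table of jobs sorted by base address; the copying (`memmove`) is exactly the overhead the next slide mentions. All names are illustrative.

```c
#include <string.h>
#include <stddef.h>

/* Illustrative job-table entry: base and length of one allocated
 * region inside a simulated physical memory array. */
struct job {
    size_t base, length;
};

/* Compact memory by sliding every job toward address 0, in address
 * order, so that all free space coalesces into one hole at the top.
 * Returns the start of that single remaining hole. */
size_t compact(unsigned char *memory, struct job *jobs, size_t njobs) {
    size_t next_free = 0;
    for (size_t i = 0; i < njobs; i++) {       /* jobs sorted by base */
        if (jobs[i].base != next_free) {
            memmove(memory + next_free, memory + jobs[i].base,
                    jobs[i].length);           /* the costly copy */
            jobs[i].base = next_free;          /* update relocation info */
        }
        next_free += jobs[i].length;
    }
    return next_free;
}
```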
Fixed vs. Dynamic Partitions
Fixed partitions suffer from internal fragmentation.
Dynamic partitions suffer from external fragmentation.
Compaction suffers from overhead.
Question
What if there are more processes than can fit into memory at once?
Swapping
Figure (animation across several slides): main memory holds the Monitor plus a single user partition, with the disk as backing store. User 1 is loaded into the user partition; when User 2 must run, User 1 is swapped out to disk and User 2 is swapped in; later, User 1 is swapped back into memory.
Paging
Both fixed- and dynamic-size partitioning are inefficient: fixed partitioning suffers internal fragmentation, while dynamic partitioning suffers external fragmentation.
Suppose we partition physical main memory into small equal-size chunks (frames).
Suppose each process's memory is also divided into small fixed-size chunks (pages) of the same size.
Pages of a process can then be mapped to available frames of memory.
We would like to allocate non-contiguous frames to a process. Can we do that? If so, does the system suffer internal or external fragmentation?
Paging
We can implement such a scheme as follows:
The programmer uses a contiguous logical address space (virtual memory divided into pages).
The system uses a per-process page table to record the page-to-frame mapping for each process.
Translation from logical to physical addresses is performed by the CPU at run time.
A logical address is composed of (page #, offset); the processor uses the active process's page table to produce a physical address (frame #, offset).
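A minimal software model of the translation the hardware performs at run time, assuming a simple array page table and 4 KB pages; `struct pte`, `PAGE_SIZE`, and `translate` are illustrative names.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 4096u                    /* bytes per page/frame */

/* Illustrative page-table entry: frame number plus a present bit. */
struct pte {
    uint32_t frame;
    bool present;
};

/* Translate a logical address using the active process's page table.
 * Returns true and fills *phys on success; false signals a page fault. */
bool translate(const struct pte *page_table, uint32_t vaddr, uint32_t *phys) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */

    if (!page_table[page].present)
        return false;                      /* not mapped: page fault */

    *phys = page_table[page].frame * PAGE_SIZE + offset;
    return true;
}
```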
Paging
Figure (animation across several slides): virtual memory pages 1-8 are stored on disk; main memory has four frames; a page table maps each virtual page to a frame. Requests for pages 3, 1, 6, and 2 load those pages into the four frames. When an address inside virtual page 8 is then requested, no free frame remains, so virtual page 1 is stored back to disk and page 8 is loaded into the freed frame, with the page table updated accordingly.
Page Mapping Hardware
Figure: a virtual address (P, D) is split into page number P and displacement D; the page table maps P to a frame number F; the physical address is formed as (F, D), so Contents(P, D) in virtual memory corresponds to Contents(F, D) in physical memory.
Page Mapping Hardware: Example
Figure: page size 1000, with 1000 possible virtual pages. Virtual address 003006 splits into page 003 and displacement 006; the page table maps page 3 to frame 4, giving physical address 004006, so Contents(3006) in virtual memory equals Contents(4006) in physical memory.
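The arithmetic behind the figure, assuming the decimal page size of 1000 used in the example:

```latex
\text{page} = \left\lfloor \tfrac{3006}{1000} \right\rfloor = 3,\qquad
\text{offset} = 3006 \bmod 1000 = 6,\qquad
3 \mapsto 4 \;\Rightarrow\; \text{physical} = 4 \times 1000 + 6 = 4006
```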
Page Fault
If the CPU tries to access a virtual page that is not mapped into any memory frame, a page fault is triggered.
The page fault handler (in the OS's VM subsystem):
Finds out whether any free memory frame is available.
If not, evicts some resident page to disk (swap space).
Allocates a free memory frame.
Loads the faulted virtual page into the prepared memory frame.
Updates the page table.
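A self-contained simulation of these steps, using a trivial round-robin eviction policy; the array sizes, the policy, and all names are assumptions for illustration, not how a real VM subsystem is structured.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define NPAGES   8             /* virtual pages of the process */
#define NFRAMES  4             /* physical frames available    */
#define PAGESZ   4096u

/* Simulated physical memory and swap area. */
static uint8_t frames[NFRAMES][PAGESZ];
static uint8_t swap_space[NPAGES][PAGESZ];

struct pte { int frame; bool present; };
static struct pte page_table[NPAGES];
static int frame_owner[NFRAMES] = { -1, -1, -1, -1 };  /* -1 = free frame */
static int next_victim = 0;    /* trivial round-robin replacement */

/* Page-fault handler following the slide's steps. */
void handle_page_fault(int page) {
    int frame = -1;

    /* 1. Look for a free memory frame. */
    for (int f = 0; f < NFRAMES; f++)
        if (frame_owner[f] == -1) { frame = f; break; }

    /* 2. If none, evict some resident page to the swap space. */
    if (frame == -1) {
        frame = next_victim;
        next_victim = (next_victim + 1) % NFRAMES;
        int victim = frame_owner[frame];
        memcpy(swap_space[victim], frames[frame], PAGESZ);
        page_table[victim].present = false;
    }

    /* 3-4. Load the faulted page into the prepared frame. */
    memcpy(frames[frame], swap_space[page], PAGESZ);

    /* 5. Update the page table. */
    frame_owner[frame] = page;
    page_table[page].frame = frame;
    page_table[page].present = true;
}
```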
More about Paging
It is important to use a page size that is a power of 2: it makes it easy to implement a hardware function that performs run-time address translation.
Page size is 2^n, usually 512 bytes, 1 KB, 2 KB, 4 KB, or 8 KB.
E.g., a 32-bit virtual address may cover 2^20 (1M) pages with 4 KB (2^12 bytes) per page.
Page table: assuming each page table entry (PTE) requires 4 bytes, 2^20 pages take 2^22 bytes (4 MB).
The page table of the active process must be resident in main memory.
The page table base register must be changed on a context switch.
No external fragmentation; internal fragmentation occurs on the last page only.
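A short sketch of why a power-of-two page size helps: with 4 KB pages, a 32-bit address splits into a 20-bit page number and a 12-bit offset using one shift and one mask, with no division. The macro and function names are illustrative.

```c
#include <stdint.h>

#define PAGE_SHIFT 12u                        /* 4 KB = 2^12 bytes      */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)  /* low 12 bits = offset   */

/* Splitting the address is a shift and a mask; reassembling the
 * physical address is a shift and an OR. */
static inline uint32_t page_number(uint32_t vaddr) { return vaddr >> PAGE_SHIFT; }
static inline uint32_t page_offset(uint32_t vaddr) { return vaddr & PAGE_MASK; }
static inline uint32_t phys_addr(uint32_t frame, uint32_t offset) {
    return (frame << PAGE_SHIFT) | offset;
}
```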