
1 Memory Management

2 Fixed Partitions Divide memory into n (possibly unequal) partitions. Problem: fragmentation. [Diagram: memory divided at 0k, 4k, 16k, 64k, and 128k; the legend marks free space; shaded gaps inside partitions show internal fragmentation, which cannot be reallocated.]
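The waste from internal fragmentation is just the gap between a job and its fixed partition. A minimal sketch, with hypothetical sizes (not from the slides):

```python
# Sketch: internal fragmentation in fixed partitions. A job placed in a
# partition larger than it needs wastes the difference, and that waste
# cannot be reallocated to another job. Sizes below are hypothetical.
def internal_fragmentation(job_kb, partition_kb):
    """Wasted space when a job occupies a fixed partition."""
    assert job_kb <= partition_kb, "job does not fit"
    return partition_kb - job_kb

# A 10 KB job placed in a 16 KB partition wastes 6 KB.
print(internal_fragmentation(10, 16))  # -> 6
```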

3 Fixed Partition Allocation Implementation Issues
- Separate input queue for each partition: inefficient utilization of memory when the queue for a large partition is empty but the queue for a small partition is full. Small jobs have to wait to get into memory even though plenty of memory is free.
- One single input queue for all partitions: allocate a partition where the job fits in (Best Fit or Available Fit).
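The single-queue best-fit policy above can be sketched as follows (partition sizes are hypothetical, not from the slides):

```python
# Sketch of best-fit allocation from a single input queue: choose the
# smallest free partition that still holds the job.
def best_fit(job_kb, free_partitions):
    """Return the index of the smallest free partition >= job_kb, or None."""
    candidates = [(size, i) for i, size in enumerate(free_partitions)
                  if size >= job_kb]
    if not candidates:
        return None  # job must wait in the queue
    return min(candidates)[1]

free = [4, 64, 16, 128]
print(best_fit(10, free))  # picks the 16 KB partition (index 2)
```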

4 Relocation
- Correct starting address when a program starts in memory: different jobs will run at different addresses.
- Logical (virtual) addresses: logical address space, range 0 to max.
- Physical addresses: physical address space, range R+0 to R+max for base value R.
- The memory-management unit (MMU) maps virtual addresses to physical addresses.
- Relocation register: the mapping requires hardware (MMU) with a base register.
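The mapping above is a single addition. A minimal sketch (the base value 4000 is hypothetical):

```python
# Sketch of MMU relocation with a base register:
# physical address = logical address + base.
def relocate(logical_addr, base_register):
    return logical_addr + base_register

# A program loaded at base R = 4000: logical 0..max maps to R+0..R+max.
print(relocate(0, 4000), relocate(100, 4000))  # -> 4000 4100
```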

5 Relocation Register [Diagram: the CPU issues an instruction with logical address MA; the MMU adds the base register value BA and sends the physical address MA+BA to memory.]

6 Question 1 - Protection Problem: how to prevent a malicious process from writing or jumping into other users' or the OS's partitions. Solution: base and bounds registers.

7 Base Bounds Registers [Diagram: the CPU issues logical address LA; the MMU compares it against the bounds (limit) register and raises a fault if it exceeds the limit; otherwise it adds the base register value BA to form the physical address PA = MA+BA.]
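The base/bounds check can be sketched in a few lines (the base and limit values are hypothetical):

```python
# Sketch of base/bounds protection: the MMU faults when a logical address
# reaches the limit; otherwise it adds the base to form the physical address.
def translate(logical_addr, base, limit):
    if logical_addr >= limit:
        raise MemoryError("protection fault: address out of bounds")
    return base + logical_addr

print(translate(100, 4000, 1000))  # -> 4100
try:
    translate(2000, 4000, 1000)    # outside the 1000-byte partition
except MemoryError as e:
    print(e)                       # prints the protection fault message
```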

8 Question 2 What if there are more processes than can fit into memory?

9 Swapping [Diagram: memory holds the monitor and an empty user partition; a disk is available for swapping.]

10 Swapping [Diagram: User 1 is loaded into the user partition.]

11 Swapping [Diagram: User 1 is copied from the user partition out to disk.]

12 Swapping [Diagram: User 2 is loaded into the user partition while User 1's image remains on disk.]

13 Swapping [Diagram: User 2 is swapped out to disk and User 1 is swapped back into the user partition.]

14 Solve Fragmentation w. Compaction [Diagram: five snapshots (steps 5-9) of memory holding the Monitor and Jobs 3, 5, 6, 7, and 8 with free holes between them; compaction slides the jobs together so the free space merges into a single block.]
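The compaction step in the diagram can be sketched as sliding all jobs toward low memory so the holes coalesce (the layout below is hypothetical, not the slide's exact one):

```python
# Sketch of compaction: memory is modeled as (name, size) segments, with
# "free" marking holes. Compaction moves jobs together and merges all the
# free space into one block at the end.
def compact(segments):
    jobs = [s for s in segments if s[0] != "free"]
    free_total = sum(size for name, size in segments if name == "free")
    return jobs + [("free", free_total)]

layout = [("Monitor", 4), ("Job 3", 8), ("free", 6), ("Job 5", 10), ("free", 4)]
print(compact(layout))  # one merged 10 KB hole after the jobs
```

The cost hinted at on the next slide is the copying itself: every job segment may have to move.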

15 Storage Management Problems
- Fixed partitions suffer from internal fragmentation
- Variable partitions suffer from external fragmentation
- Compaction suffers from overhead

16 Paging
- Provide the user with virtual memory that is as big as the user needs
- Store virtual memory on disk
- Cache the parts of virtual memory being used in real memory
- Load and store cached virtual memory without user program intervention

17 Benefits of Virtual Memory
- Use cheap secondary storage ($) to extend expensive DRAM ($$$) with reasonable performance
- Protection: programs do not step over each other
- Convenience: flat address space; programs have the same view of the world; load and store cached virtual memory without user program intervention
- Reduce fragmentation: make cacheable units all the same size (the page)

18 Paging [Diagram: virtual memory pages 1-8 are stored on disk; real memory holds four frames; a request for an address within virtual memory page 3 loads page 3 into a frame and records the mapping in the page table.]

19 Paging [Diagram: a request for an address within virtual memory page 1 loads page 1 into a free frame; the page table now maps pages 1 and 3.]

20 Paging [Diagram: a request within page 6 loads page 6; the page table maps pages 1, 3, and 6.]

21 Paging [Diagram: a request within page 2 loads page 2; all four frames are now full, mapping pages 1, 2, 3, and 6.]

22 Paging [Diagram: a request within page 8 finds no free frame; victim page 1 is stored back to disk to free its frame.]

23 Paging [Diagram: virtual memory page 8 is loaded into the freed frame; the page table now maps pages 2, 3, 6, and 8.]

24 Page Mapping Hardware [Diagram: a virtual address (P,D) is split into page number P and displacement D; the page table entry for P holds a valid bit and frame number F, producing physical address (F,D); Contents(P,D) in virtual memory equals Contents(F,D) in physical memory.]

25 Page Mapping Hardware [Diagram: with page size 1000 and 1000 possible virtual pages, virtual address 004006 splits into page 004 and displacement 006; the page table maps page 4 to frame 5, so the physical address is 005006, and Contents(4006) equals Contents(5006).]
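The slide's decimal example works out as follows (only the single mapping 4 -> 5 from the slide is modeled):

```python
# Sketch of the decimal example: page size 1000, so virtual address 004006
# splits into page 4 and displacement 6; the page table maps page 4 to
# frame 5, giving physical address 005006.
PAGE_SIZE = 1000
page_table = {4: 5}  # only the mapping used in the example

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(4006))  # -> 5006
```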

26 Paging Issues
- Page size is 2^n, usually 512, 1K, 2K, 4K, or 8K bytes. E.g., a 32-bit VM address may have 2^20 (1 M) pages with 4K (2^12) bytes per page.

27 Paging Issues
- Question: physical memory size is 32 MB (2^25 bytes) and page size is 4K bytes. How many pages? 2^13.
- The page table base register must be changed on a context switch. Why? Each process has a different page table.
- NO external fragmentation; internal fragmentation on the last page ONLY.
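The arithmetic on the last two slides is just a division by the page size, i.e. a right shift by the page-offset width:

```python
# Checking the slides' arithmetic: with 4 KB (2^12) pages, a 32 MB (2^25)
# physical memory holds 2^25 / 2^12 = 2^13 frames, and a 32-bit virtual
# address space has 2^32 / 2^12 = 2^20 pages.
PAGE_BITS = 12
print(2**25 >> PAGE_BITS)  # -> 8192 frames (2^13)
print(2**32 >> PAGE_BITS)  # -> 1048576 pages (2^20)
```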

28 Paging - Caching the Page Table
- Can cache page table entries in registers while keeping the page table in memory at the location given by a page table base register
- The page table base register is changed at context switch time

29 Paging Implementation Issues
- The caching scheme can use associative registers, look-aside memory, or content-addressable memory (the TLB)
- Page address cache (TLB) hit ratio: percentage of time a page is found in the associative memory
- If not found in the associative memory, the mapping must be loaded from the page table, requiring an additional memory reference
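The hit ratio determines the effective access time: a miss costs an extra memory reference for the page-table lookup. A sketch with hypothetical timings (the 100 ns and 10 ns figures are assumptions, not from the slides):

```python
# Sketch of effective memory access time with a TLB. A hit costs the TLB
# check plus one memory access; a miss adds a second memory access for the
# page-table lookup.
def effective_access_time(hit_ratio, mem_ns=100, tlb_ns=10):
    hit = tlb_ns + mem_ns        # TLB hit: one memory reference
    miss = tlb_ns + 2 * mem_ns   # TLB miss: page table + data reference
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.9))  # -> 120.0 ns at a 90% hit ratio
```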

30 Page Mapping Hardware [Diagram: the associative lookup (TLB) is checked first with page number P; on a miss, the page table supplies the P->F mapping to form physical address (F,D).]

31 Page Mapping Hardware [Diagram: the associative lookup, organized by LRU, misses on page 004; the page table maps 004->009, so virtual address 004006 becomes physical address 009006.]

32 Page Mapping Hardware [Diagram: the 004->009 entry has been installed in the associative lookup, so the next reference to page 004 hits without consulting the page table.]

33 TLB Function
- When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel).
- If a valid match is found, the frame is taken from the TLB without going through the page table.
- If there is no valid match, the MMU detects the miss and does an ordinary page table lookup. It then evicts one entry from the TLB and replaces it with the new one, so that next time the page is found in the TLB.
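The miss-then-install behavior described above, with the LRU organization from the earlier diagrams, can be sketched as a small cache (capacity and the toy page table are hypothetical):

```python
# Sketch of TLB behavior as an LRU cache in front of the page table.
from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()
page_table = {p: p + 100 for p in range(10)}  # toy page table: page -> frame

def lookup(page):
    if page in tlb:                # TLB hit: no page-table walk needed
        tlb.move_to_end(page)      # mark as most recently used
        return tlb[page]
    frame = page_table[page]       # miss: ordinary page-table lookup
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)    # evict the least recently used entry
    tlb[page] = frame              # install so the next reference hits
    return frame

print(lookup(3), lookup(3))  # -> 103 103 (miss then hit, same frame)
```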

34 Multilevel Page Tables
- Since the page table can be very large, one solution is to page the page table
- Divide the page number into: an index into a page table of second-level page tables, and a page within a second-level page table
- Advantage: no need to keep all the page tables in memory all the time; only recently accessed memory's mappings need to be kept in memory, the rest can be fetched on demand

35 Multilevel Page Tables [Diagram: the virtual address is split into dir, table, and offset fields; the dir field indexes a directory whose entry points to a second-level page table containing the PTE.]

36 Multilevel Paging

37 Example Addressing on a Multilevel Page Table System A logical address (on 32-bit x86 with 4K page size) is divided into: a page number consisting of 20 bits, and a page offset consisting of 12 bits. The page number is further divided into: a 10-bit index into the page directory, and a 10-bit index into a second-level page table.
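The 10/10/12 split described above is pure bit extraction (the sample address is hypothetical):

```python
# Sketch of the 32-bit x86 address split: 10-bit page directory index,
# 10-bit page table index, 12-bit page offset.
def split(vaddr):
    offset = vaddr & 0xFFF          # low 12 bits
    table = (vaddr >> 12) & 0x3FF   # next 10 bits: second-level index
    directory = vaddr >> 22         # top 10 bits: directory index
    return directory, table, offset

print(split(0x00403004))  # -> (1, 3, 4)
```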

38 Addressing on Multilevel Page Table

