
1 Memory
Copyright ©: Nahrstedt, Angrave, Abdelzaher

2 Memory Allocation
- Compile for overlays
- Compile for fixed partitions
  - Separate queue per partition
  - Single queue
- Relocation and variable partitions
- Dynamic contiguous allocation (bitmaps versus linked lists)
- Fragmentation issues
- Swapping
- Paging

3 Overlays
[Diagram: main memory (0K-12K) holds the main program, the overlay manager, and a single overlay area; Overlays 1-3 reside in secondary storage and are loaded into the overlay area as needed]

4 Multiprogramming with Fixed Partitions
Divide memory into n (possibly unequal) partitions. Problem: fragmentation.
[Diagram: memory partitioned at 0k, 4k, 16k, 64k, and 128k, with free space left inside the partitions]

5 Fixed Partitions
[Diagram: the same partitions at 0k, 4k, 16k, 64k, and 128k; the unused space inside an allocated partition is internal fragmentation and cannot be reallocated]

6 Fragmentation
Internal fragmentation: an allocated block is larger than the data it holds.
External fragmentation: the aggregate free space would be large enough to satisfy a request, but no single free block is large enough.
[Diagram: memory holding the monitor and Jobs 3, 5, 6, 7, and 8, with free space scattered between them]

7 Fixed Partition Allocation: Implementation Issues
- Separate input queue for each partition
  - Requires sorting the incoming jobs and putting them into separate queues
  - Inefficient use of memory when the queue for a large partition is empty but the queue for a small partition is full: small jobs have to wait to get into memory even though plenty of memory is free
- One single input queue for all partitions
  - Allocate any partition where the job fits: best fit, worst fit, or first fit

8 Relocation
- Decide the starting address when a program starts up; different jobs will run at different addresses
- When a program is linked, the linker must know at what address the program will begin in memory
- Logical (virtual) addresses: logical address space, range 0 to max
- Physical addresses: physical address space, range R+0 to R+max for base value R
- The user program never sees the real physical addresses
- The memory-management unit (MMU) maps virtual to physical addresses using a relocation (base) register; the mapping requires hardware support

9 Relocation Register
[Diagram: the CPU issues a logical (instruction) address MA; the MMU adds the base register value BA to produce the physical address MA+BA, which is sent to memory]

10 Question 1 - Protection
Problem: how do we prevent a malicious process from writing into, or jumping into, another user's partition or the OS partition?
Solution: base and bounds registers.

11 Base and Bounds Registers
[Diagram: the CPU issues a logical address LA; the MMU compares it against the bounds (limit) register and raises a fault if it is out of range, otherwise adds the base address BA to form the physical memory address MA+BA]
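The translation and protection check on slides 9 and 11 can be summarized in a few lines of code. This is a minimal sketch of the hardware's logic under assumed register values, not the interface of any real system; the names base_reg, bounds_reg, and translate are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative MMU state: values the OS would load on a context switch. */
static uint32_t base_reg   = 0x40000;   /* partition start (BA)     */
static uint32_t bounds_reg = 0x10000;   /* partition length (limit) */

/* Translate a logical address; return -1 to signal a protection fault. */
static int64_t translate(uint32_t logical)
{
    if (logical >= bounds_reg)               /* outside the partition: fault */
        return -1;
    return (int64_t)base_reg + logical;      /* physical = base + logical    */
}

int main(void)
{
    printf("0x0042  -> 0x%llx\n", (long long)translate(0x0042));
    printf("0x20000 -> fault? %d\n", translate(0x20000) < 0);
    return 0;
}
```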

12 Contiguous Allocation and Variable Partitions: Bitmaps versus Linked Lists
[Diagram: a region of memory holding 5 processes and 3 holes; tick marks show allocation units and shaded regions are free, with the corresponding bitmap and linked-list representations shown below]
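To make the bitmap representation concrete, here is a small sketch of allocating and freeing a run of units in a free bitmap. It assumes one bit per allocation unit (1 = in use) and a simple linear scan; the names and sizes are illustrative, not from the slides.

```c
#include <stdint.h>

#define UNITS 1024                      /* allocation units tracked      */
static uint8_t bitmap[UNITS / 8];       /* 1 bit per unit, 1 = allocated */

static int  bit(int i)       { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i)   { bitmap[i / 8] |=  (uint8_t)(1u << (i % 8)); }
static void clear_bit(int i) { bitmap[i / 8] &= (uint8_t)~(1u << (i % 8)); }

/* First fit over the bitmap: find n consecutive free units, mark them
 * allocated, and return the index of the first one (-1 if none). */
static int bitmap_alloc(int n)
{
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = bit(i) ? 0 : run + 1;
        if (run == n) {
            int start = i - n + 1;
            for (int j = start; j <= i; j++) set_bit(j);
            return start;
        }
    }
    return -1;
}

/* Reclaiming freed space is just clearing the bits back. */
static void bitmap_free(int start, int n)
{
    for (int j = start; j < start + n; j++) clear_bit(j);
}
```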

13 More on Memory Management with Linked Lists
[Diagram: the four neighbor combinations for the terminating process X - allocated on both sides, free on the left, free on the right, or free on both sides - and how the freed block is merged with its free neighbors]
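As a hedged illustration of what happens in each of those cases, the sketch below frees a node in a doubly linked, address-ordered list of memory segments and coalesces it with any free neighbor. The segment structure and function name are assumptions for the example, not code from the lecture.

```c
#include <stdlib.h>

/* One entry per contiguous memory segment, kept in address order. */
struct segment {
    size_t start, len;
    int    free;                          /* 1 = hole, 0 = in use */
    struct segment *prev, *next;
};

/* Mark seg free and merge it with free neighbors (covers all four cases). */
static void release(struct segment *seg)
{
    seg->free = 1;

    if (seg->next && seg->next->free) {   /* merge with a hole on the right */
        struct segment *r = seg->next;
        seg->len += r->len;
        seg->next = r->next;
        if (r->next) r->next->prev = seg;
        free(r);
    }
    if (seg->prev && seg->prev->free) {   /* merge with a hole on the left  */
        struct segment *l = seg->prev;
        l->len += seg->len;
        l->next = seg->next;
        if (seg->next) seg->next->prev = l;
        free(seg);
    }
}
```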

14 Contiguous Variable Partition Allocation Schemes
Bitmap versus linked list:
- Which one occupies more space? It depends on the allocation scenario, but in most cases the bitmap needs more space.
- Which one is faster at reclaiming freed space? On average the bitmap, because it only needs to clear the corresponding bits.
- Which one is faster at finding a free hole? On average the linked list, because all free holes can be linked together.

15 Storage Placement Strategies
- Best fit: use the smallest hole equal to or larger than the request (rationale?)
- First fit: use the first hole large enough to meet the request
- Worst fit: use the largest available hole
(A sketch of all three strategies appears below.)
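A minimal sketch of the three strategies over a list of holes, assuming a simple array of (start, length) holes; the representation and function name are illustrative, not from the slides.

```c
#include <stddef.h>

struct hole { size_t start, len; };

/* Return the index of the hole chosen for a request of size n, or -1.
 * policy: 0 = first fit, 1 = best fit, 2 = worst fit. */
static int choose_hole(const struct hole *h, int count, size_t n, int policy)
{
    int chosen = -1;
    for (int i = 0; i < count; i++) {
        if (h[i].len < n) continue;                        /* hole too small  */
        if (policy == 0) return i;                         /* first fit       */
        if (chosen == -1 ||
            (policy == 1 && h[i].len < h[chosen].len) ||   /* best fit        */
            (policy == 2 && h[i].len > h[chosen].len))     /* worst fit       */
            chosen = i;
    }
    return chosen;
}
```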

16 Storage Placement Strategies
Each strategy has problems:
- Best fit creates small holes that can't be used
- Worst fit uses up the large holes, so large programs may not fit later
- First fit creates average-size holes

17 How Bad Is Fragmentation?
Statistical argument for random request sizes under first fit: given N allocated blocks, roughly 0.5N blocks will be lost to fragmentation. This is known as the 50% rule.

18 Solve Fragmentation with Compaction
[Diagram: successive snapshots of memory as the monitor and Jobs 3, 5, 6, 7, and 8 are slid together, merging the scattered free regions (Free 5 through Free 9) into one contiguous free block]

19 Storage Management Problems
- Fixed partitions suffer from internal fragmentation
- Variable partitions suffer from external fragmentation
- Compaction suffers from overhead

20 Question
What if there are more processes than can fit into memory?

21-27 Swapping
[Animation across slides 21-27: the disk holds swapped-out user images while memory holds the monitor and a single user partition; User 1 runs in the partition and is swapped out to disk, User 2 is swapped in from disk and runs, and later User 2 is swapped out and User 1 is swapped back in; the monitor stays resident throughout]

28-33 Paging
[Animation across slides 28-33: virtual memory pages 1-8 are stored on disk while real memory (the cache) holds only four page frames; as pages 3, 1, 6, and 2 are requested they are loaded into frames and the page table records each virtual-page-to-frame mapping; when page 8 is then requested with all frames full, page 1 is written back to disk and page 8 is loaded into the freed frame]

34 Page Mapping Hardware
[Diagram: a virtual address is split into a page number P and an offset D; the page table maps P to a frame number F, and the physical address is formed as (F, D), so Contents(P, D) = Contents(F, D)]

35 Page Mapping Hardware
[Diagram: the same hardware with concrete numbers - page size 1000, 1000 possible virtual pages, 8 page frames; virtual address 004006 splits into page 004 and offset 006, the page table maps page 4 to frame 5, and the physical address is 005006, so Contents(004006) = Contents(005006)]
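The worked example on slide 35 can be reproduced in a few lines. This sketch assumes the slide's decimal page size of 1000 and a toy page table (only the page-4-to-frame-5 mapping comes from the slide); it just illustrates the split-and-lookup arithmetic.

```c
#include <stdio.h>

#define PAGE_SIZE 1000   /* decimal page size used in the slide's example */

/* Toy page table: virtual page -> physical frame (page 4 -> frame 5). */
static const long page_table[8] = { 2, 1, 6, 0, 5, 0, 0, 0 };

static long translate(long vaddr)
{
    long p = vaddr / PAGE_SIZE;          /* page number P  */
    long d = vaddr % PAGE_SIZE;          /* offset D       */
    long f = page_table[p];              /* frame number F */
    return f * PAGE_SIZE + d;            /* physical (F,D) */
}

int main(void)
{
    printf("%06ld\n", translate(4006));  /* prints 005006 */
    return 0;
}
```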

36 Page Fault
Accessing a virtual page that is not mapped into any physical page triggers a fault in hardware. The page fault handler (in the OS's VM subsystem) then:
- Checks whether any free physical page is available; if not, evicts some resident page to disk (the swap space)
- Allocates a free physical page
- Loads the faulted virtual page into the prepared physical page
- Modifies the page table
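The steps above map directly onto a handler skeleton. This is a hedged sketch: the frame count, the trivial "always evict frame 0" policy, and the omitted disk I/O are placeholders, not the behavior of any particular kernel.

```c
#include <stdio.h>

#define NFRAMES 4
#define NPAGES  16

static long frame_owner[NFRAMES];   /* which vpage each frame holds, -1 = free */
static long page_table[NPAGES];     /* vpage -> frame, -1 = not resident       */

static void handle_page_fault(long vpage)
{
    long frame = -1;
    for (long f = 0; f < NFRAMES; f++)          /* 1. any free frame?        */
        if (frame_owner[f] == -1) { frame = f; break; }

    if (frame == -1) {                          /* 2. no: evict a victim     */
        frame = 0;                              /*    (stand-in policy)      */
        page_table[frame_owner[frame]] = -1;    /*    unmap the victim       */
        /* ... write the victim page to swap space here ... */
    }

    /* ... 3. read the faulted page from disk into the frame here ... */
    frame_owner[frame] = vpage;
    page_table[vpage] = frame;                  /* 4. update the page table  */
    printf("page %ld -> frame %ld\n", vpage, frame);
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++) frame_owner[i] = -1;
    for (int i = 0; i < NPAGES; i++)  page_table[i]  = -1;
    handle_page_fault(3); handle_page_fault(1); handle_page_fault(6);
    handle_page_fault(2); handle_page_fault(8);   /* forces an eviction */
    return 0;
}
```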

37 Paging Issues
- Page size is 2^n, usually 512 bytes, 1 KB, 2 KB, 4 KB, or 8 KB
  - E.g. a 32-bit virtual address with 4 KB (2^12-byte) pages has 2^20 (1M) pages
- Page table:
  - 2^20 page entries of 4 bytes each take 2^22 bytes (4 MB)
  - Page frames must map into real memory
  - The page table base register must be changed on a context switch
- No external fragmentation; internal fragmentation on the last page only
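A quick check of that arithmetic, assuming the 4-byte page table entries that the 4 MB figure implies:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long vspace    = 1ULL << 32;          /* 32-bit address space */
    unsigned long long page_size = 1ULL << 12;          /* 4 KB pages           */
    unsigned long long pages     = vspace / page_size;  /* 2^20 = 1M pages      */
    unsigned long long pte_size  = 4;                   /* assumed 4-byte PTEs  */
    printf("pages = %llu, page table = %llu MB\n",
           pages, (pages * pte_size) >> 20);            /* prints 1048576, 4    */
    return 0;
}
```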

38 Virtual-to-Physical Lookups
- Programs only know virtual addresses, and each virtual address must be translated
  - This may involve walking a hierarchical page table
- The page table can be extremely large and is stored in memory
  - So each program memory access requires several actual memory accesses
- Solution: cache the "active" part of the page table in the TLB, an "associative memory"

39 Translation Lookaside Buffer (TLB)
[Diagram: the virtual address is split into a virtual page number and an offset; the VPage# is looked up in the TLB (a small table of VPage#/PPage# pairs); on a hit the PPage# is combined with the offset to form the physical address, and on a miss the real page table is consulted]

40 TLB Function
When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel). If there is a valid match, the page frame is taken from the TLB without going through the page table. If there is no valid match, the MMU detects the miss and does a page table lookup. It then evicts one entry from the TLB and replaces it with the new one, so that the next time that page is found in the TLB.
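A software model of that behavior, assuming a tiny fully associative TLB with round-robin replacement and a stand-in page-table walk; the structure and policy are illustrative (real hardware compares all entries in parallel).

```c
#define TLB_ENTRIES 8

struct tlb_entry { int valid; unsigned long vpage, pframe; };
static struct tlb_entry tlb[TLB_ENTRIES];
static int next_victim;                           /* round-robin replacement */

/* Stand-in for a real page-table walk (the slow path). */
static unsigned long page_table_lookup(unsigned long vpage) { return vpage; }

static unsigned long tlb_translate(unsigned long vpage)
{
    for (int i = 0; i < TLB_ENTRIES; i++)             /* "parallel" compare  */
        if (tlb[i].valid && tlb[i].vpage == vpage)
            return tlb[i].pframe;                     /* hit: no table walk  */

    unsigned long pframe = page_table_lookup(vpage);  /* miss: walk the table */
    tlb[next_victim] = (struct tlb_entry){ 1, vpage, pframe };  /* refill     */
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return pframe;
}
```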

41 Effective Access Time
- TLB lookup time = s time units
- Memory access time = m time units
- Assume the page table lookup needs a single memory access, and the TLB hit ratio is h
- Effective access time: EAT = (m + s)h + (2m + s)(1 - h) = 2m + s - mh
- This shows how much access time is sped up when a memory address is dereferenced to a physical address using the associative (TLB) registers
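A quick numeric check of the formula, with assumed values (not from the slides) of m = 100 ns and s = 20 ns:

```c
#include <stdio.h>

int main(void)
{
    double m = 100.0, s = 20.0;              /* assumed access/lookup times (ns) */
    for (double h = 0.80; h <= 1.0001; h += 0.10) {
        double eat = (m + s) * h + (2 * m + s) * (1 - h);   /* = 2m + s - m*h */
        printf("h = %.2f  ->  EAT = %.0f ns\n", h, eat);    /* 140, 130, 120  */
    }
    return 0;
}
```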

42 Multilevel Page Tables
Since the page table can be very large, one solution is to make it hierarchical:
- Divide the page number into an index into a directory of second-level page tables and a page index within a second-level page table
- Advantage: no need to keep all the page tables in memory all the time; only the mappings for recently accessed memory need to be resident, and the rest can be fetched on demand

43 Multilevel Page Tables
[Diagram: the virtual address is split into a directory index, a table index, and an offset; the directory entry points to a second-level page table whose entry (pte) gives the frame]
What does this buy us? Sparse address spaces and easier paging.

44 Example Addressing on a Multilevel Page Table System
- A logical address (on 32-bit x86 with a 4 KB page size) is divided into a 20-bit page number and a 12-bit page offset
- The page number is further divided into a 10-bit page directory index and a 10-bit page table index
- IA-64: the VHPT walker needs only a single memory access to find a TLB entry because it directly accesses the level-3 nodes (which are mapped into contiguous linear space); this is much more efficient than even the x86 walker, which must first access the root-level page directory to find the desired leaf page table
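A minimal sketch of that 10/10/12 split; the shifts and masks follow directly from the field widths, and the function name is illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Split a 32-bit x86-style virtual address into its three fields. */
static void split_address(uint32_t va,
                          uint32_t *dir, uint32_t *table, uint32_t *offset)
{
    *dir    = (va >> 22) & 0x3FF;   /* top 10 bits: page directory index */
    *table  = (va >> 12) & 0x3FF;   /* next 10 bits: page table index    */
    *offset =  va        & 0xFFF;   /* low 12 bits: offset in the page   */
}

int main(void)
{
    uint32_t dir, table, offset;
    split_address(0xC0385ABCu, &dir, &table, &offset);
    printf("dir=%u table=%u offset=0x%03X\n", dir, table, offset);
    return 0;
}
```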

45 Multilevel Paging and Performance
- Each level is stored as a separate table
- Assume an n-level page table
- Converting a logical to a physical address may take _____ memory accesses (a TLB access is not a memory access)

46 Inverted Page Tables
Main idea: one PTE for each physical page frame; hash (vpage, pid) to Ppage#, trading space for time.
- Pros: small page table even for a large address space
- Cons: lookup is difficult; overhead of managing hash chains, etc.
[Diagram: the virtual address (pid, vpage, offset) is hashed to an index k into the inverted page table of n entries; the matching entry identifies the physical page, and the physical address is (k, offset)]
Note: you still end up needing extra PTE "spots" because of hash collisions; typically 4x the number of page spots.
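A hedged sketch of an inverted-table lookup with chained hashing; the structure, hash function, and sizes are illustrative assumptions, not the slide's design.

```c
#include <stdint.h>

#define NFRAMES 1024                 /* one entry per physical frame */

struct ipte {
    int      used;
    uint32_t pid, vpage;
    int      next;                   /* next index in the hash chain, -1 = end */
};
static struct ipte table[NFRAMES];
static int hash_head[NFRAMES];       /* hash bucket -> first table index */

static void ipt_init(void)
{
    for (int i = 0; i < NFRAMES; i++) { hash_head[i] = -1; table[i].used = 0; }
}

static unsigned hash(uint32_t pid, uint32_t vpage)
{
    return (pid * 2654435761u ^ vpage) % NFRAMES;   /* toy hash */
}

/* Return the frame number holding (pid, vpage), or -1 if not resident;
 * the table index of the matching entry is itself the frame number. */
static int ipt_lookup(uint32_t pid, uint32_t vpage)
{
    for (int i = hash_head[hash(pid, vpage)]; i != -1; i = table[i].next)
        if (table[i].used && table[i].pid == pid && table[i].vpage == vpage)
            return i;
    return -1;                       /* miss: fall back to the OS */
}
```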

47 Paging
- Provide the user with a virtual memory that is as big as the user needs
- Store the virtual memory on disk
- Cache the parts of virtual memory being used in real memory
- Load and store the cached virtual memory without user program intervention

