1
Rensselaer Polytechnic Institute CSCI-4210 – Operating Systems David Goldschmidt, Ph.D.
2
[Diagram: the memory hierarchy, ranging from very small, very fast, volatile storage to very large, very slow, non-volatile storage.]
3
Locations in memory are identified by memory addresses. When compiled, programs consist of relocatable code, and other compiled modules also consist of relocatable code. Symbolic addresses in source code become relative addresses in object code.
4
At load time, any additional libraries also consist of relocatable code; physical addresses are generated by the loader.
5
At run time, memory addresses of all object files are mapped to a single memory space in physical memory
6
A pair of base and limit registers defines the logical address space; these are also known as relocation registers.
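A minimal C sketch of relocation with a base and limit register: the register values, the trap message, and the sample address are illustrative assumptions, not anything the slides specify.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative base (relocation) and limit register values. */
static const unsigned long BASE_REG  = 0x140000UL;  /* start of the process's partition */
static const unsigned long LIMIT_REG = 0x020000UL;  /* size of the logical address space */

/* Translate a logical address to a physical address, trapping on violations. */
unsigned long translate(unsigned long logical) {
    if (logical >= LIMIT_REG) {
        fprintf(stderr, "trap: addressing error (logical 0x%lx >= limit 0x%lx)\n",
                logical, LIMIT_REG);
        exit(EXIT_FAILURE);
    }
    return BASE_REG + logical;   /* relocation: physical = base + logical */
}

int main(void) {
    printf("logical 0x01000 -> physical 0x%lx\n", translate(0x01000UL));
    return 0;
}
```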
7
Variable-length or dynamic partitions: when a new process enters the system, it is allocated a single contiguous block of memory. The operating system maintains a list of allocated partitions and free partitions.
[Diagram: successive memory snapshots as process 8 departs and processes 9 and 1 are allocated into the freed space alongside the OS and processes 2 and 5.]
8
How can we place a new process Pi in memory?
First-fit algorithm: allocate the first free block that's large enough to accommodate Pi.
Best-fit algorithm: allocate the smallest free block that's large enough to accommodate Pi.
Next-fit algorithm: allocate the next free block that's large enough, searching from the last allocated block.
Worst-fit algorithm: allocate the largest free block that's large enough to accommodate Pi.
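A hedged C sketch of the first-fit and best-fit choices over a hypothetical free list; the block sizes and the 212 KB request are made up for illustration, and next-fit and worst-fit would follow the same scanning pattern.

```c
#include <stdio.h>

/* Hypothetical free list: sizes (in KB) of the free blocks, in address order. */
static int free_blocks[] = { 100, 500, 200, 300, 600 };
static const int NUM_BLOCKS = 5;

/* First-fit: return the index of the first free block large enough, or -1. */
int first_fit(int request) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (free_blocks[i] >= request) return i;
    return -1;
}

/* Best-fit: return the index of the smallest free block large enough, or -1. */
int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (free_blocks[i] >= request &&
            (best == -1 || free_blocks[i] < free_blocks[best]))
            best = i;
    return best;
}

int main(void) {
    int request = 212;  /* process Pi needs 212 KB (illustrative) */
    printf("first-fit picks block %d, best-fit picks block %d\n",
           first_fit(request), best_fit(request));
    return 0;
}
```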
9
Memory is wasted due to fragmentation, which can cause performance issues.
Internal fragmentation is wasted memory within a partition or within a process's allocated memory.
External fragmentation can reduce the number of runnable processes:
▪ Total memory space exists to satisfy a memory request, but the free memory is not contiguous.
[Diagram: a memory layout with many processes and scattered free blocks, illustrating external fragmentation.]
10
A noncontiguous memory allocation scheme avoids the external fragmentation problem.
Slice up physical memory into fixed-sized blocks called frames
▪ Sizes typically range from 2^9 to 2^14 bytes
Slice up logical memory into fixed-sized blocks called pages
Allocate pages into frames
▪ Note that frame size equals page size
11
When a process of size n pages is ready to run, the operating system finds n free frames. The OS keeps track of pages via a page table.
[Diagram: process Pi's pages placed into main-memory frames, with each frame marked in use or free.]
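A minimal sketch, assuming a made-up 16-frame physical memory, of how the OS could claim n free frames and record them in a per-process page table.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_FRAMES 16   /* illustrative size of physical memory, in frames */

static bool frame_in_use[NUM_FRAMES];   /* free-frame list: true == in use */

/* Allocate n free frames for a process, filling its page table.
 * page_table[p] receives the frame number that page p maps to. */
bool allocate_pages(int n, int page_table[]) {
    int free_count = 0;
    for (int f = 0; f < NUM_FRAMES; f++)
        if (!frame_in_use[f]) free_count++;
    if (free_count < n) return false;   /* not enough free frames */

    int page = 0;
    for (int f = 0; f < NUM_FRAMES && page < n; f++) {
        if (!frame_in_use[f]) {
            frame_in_use[f] = true;     /* claim the frame */
            page_table[page++] = f;     /* map page -> frame */
        }
    }
    return true;
}

int main(void) {
    int page_table[4];
    if (allocate_pages(4, page_table))
        for (int p = 0; p < 4; p++)
            printf("page %d -> frame %d\n", p, page_table[p]);
    return 0;
}
```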
12
Page tables map logical memory addresses to physical memory addresses
13
Example: process Pi needs 16 MB of logical memory, and the page size is 4 MB. Logical memory is mapped to a 32 MB physical memory with a frame size of 4 MB.
Addresses (in MB) and their binary equivalents:
0 ==> 000000
4 ==> 000100
8 ==> 001000
12 ==> 001100
16 ==> 010000
20 ==> 010100
24 ==> 011000
28 ==> 011100
15
Every logical address is sliced into two distinct components:
Page number (p): used as an index into the page table to obtain the base physical memory address
Page offset (d): combined with the base address to identify the physical memory address
[Diagram: a logical address split into page number p and page offset d.]
16
This scheme covers a logical address space of size 2^m with page size 2^n: the page number p occupies the high-order (m − n) bits of the logical address, and the page offset d occupies the low-order n bits.
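A small C sketch of that slice; the m = 32 and n = 12 values (4 KB pages) and the sample address are assumptions chosen for illustration, not values from the slides.

```c
#include <stdio.h>
#include <stdint.h>

#define N_BITS 12                          /* page size = 2^12 = 4096 bytes (assumed) */
#define PAGE_SIZE   (1u << N_BITS)
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void) {
    uint32_t logical = 0x00ABCDEF;         /* an arbitrary 32-bit logical address */
    uint32_t p = logical >> N_BITS;        /* page number: high-order (m - n) bits */
    uint32_t d = logical & OFFSET_MASK;    /* page offset: low-order n bits */
    printf("logical 0x%08lx -> page %lu, offset %lu\n",
           (unsigned long)logical, (unsigned long)p, (unsigned long)d);
    return 0;
}
```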
18
The page table is kept in main memory, so every memory access request actually requires two memory accesses:
1. one access to fetch the page-table entry
2. one access to fetch the data or instruction itself
19
Use page-table caching at the hardware level to speed up address translation: the hardware-level cache is the translation look-aside buffer (TLB).
20
Given:
Memory access time is 100 nanoseconds
TLB access time is 20 nanoseconds
TLB hit ratio is 80%
The effective memory-access time (EMAT) is 0.80 × 120 ns + 0.20 × 220 ns = 140 ns
What is the effective memory-access time given a hit ratio of 99%? 50%?
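A short C sketch of the EMAT formula implied by the numbers above (a TLB hit costs one TLB lookup plus one memory access; a miss costs the TLB lookup plus two memory accesses), which makes the 99% and 50% cases easy to check.

```c
#include <stdio.h>

/* Effective memory-access time:
 *   hit  cost = TLB lookup + 1 memory access
 *   miss cost = TLB lookup + 2 memory accesses (page-table entry, then data) */
double emat(double hit_ratio, double mem_ns, double tlb_ns) {
    double hit_cost  = tlb_ns + mem_ns;
    double miss_cost = tlb_ns + 2.0 * mem_ns;
    return hit_ratio * hit_cost + (1.0 - hit_ratio) * miss_cost;
}

int main(void) {
    printf("80%% hit ratio: %.1f ns\n", emat(0.80, 100.0, 20.0));  /* 140.0 ns */
    printf("99%% hit ratio: %.1f ns\n", emat(0.99, 100.0, 20.0));  /* 121.0 ns */
    printf("50%% hit ratio: %.1f ns\n", emat(0.50, 100.0, 20.0));  /* 170.0 ns */
    return 0;
}
```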
21
For large page tables, use multiple page-table levels: slice up the logical address into multiple page-table indices.
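A minimal C sketch of a two-level slice of a 32-bit logical address; the 10/10/12 bit split and the sample address are assumptions for illustration, not a layout the slides prescribe.

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed two-level split of a 32-bit logical address:
 * 10-bit outer index p1, 10-bit inner index p2, 12-bit offset d. */
#define OFFSET_BITS 12
#define INNER_BITS  10

int main(void) {
    uint32_t logical = 0xDEADBEEF;
    uint32_t d  = logical & ((1u << OFFSET_BITS) - 1);                  /* page offset */
    uint32_t p2 = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);  /* inner index */
    uint32_t p1 = logical >> (OFFSET_BITS + INNER_BITS);                /* outer index */
    printf("p1=%lu p2=%lu d=%lu\n",
           (unsigned long)p1, (unsigned long)p2, (unsigned long)d);
    return 0;
}
```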
22
Processes in the ready queue have memory images waiting on disk. Processes are swapped into and out of memory, and swapping can suffer from slow data transfer times.