1
Operating Systems Lecture Notes Memory Management Matthew Dailey Some material © Silberschatz, Galvin, and Gagne, 2002
2
Today’s Lecture Why memory management? –We want to give the user the illusion of a large, contiguous address space –But we also want to maximize the degree of multiprogramming and utilization of physical memory Topics: –Memory address binding –Logical vs. physical address spaces –Allocation –Memory paging –Segmentation Readings: Silberschatz et al., chapter 9
3
Bird’s eye view Symbolic names are bound to logical addresses, and logical addresses are bound to physical addresses. But physical memory is shared among many processes, which is where relocation, allocation, paging, and segmentation come in.
4
Typical programming steps C source (test1.c, test2.c) –compiler: gcc -c test1.c; gcc -c test2.c –object code: test1.o, test2.o –linker: gcc -o test test1.o test2.o -lmylib -lm (static library mylib; dynamic math and standard C libraries) –executable: test –loader: ./test –memory image
5
Address Binding for User Programs Programs use symbolic addresses, e.g. “int i;” The compiler usually transforms symbolic addresses into relocatable addresses (e.g. “14 bytes from the beginning of the module”). The linkage editor or the loader transforms relocatable addresses into absolute (logical) addresses. Does the OS allow programs to run anywhere in memory? –YES: The OS must transform the absolute logical addresses into physical addresses. –NO: Logical and physical addresses are the same.
6
OS-supported dynamic binding Dynamic linking of system libraries (DLLs or shared libraries): –At link time, only a stub for library routines is linked –The rest of the library sits on disk until needed –The stub checks whether library has been loaded when called, and loads the DLL if necessary. Reduces memory usage by sharing between processes Requires OS support for protection
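A minimal sketch (not from the slides) of loading shared-library code only when it is needed, using the standard POSIX dlopen/dlsym API; the library name libm.so.6 is an assumption for a typical Linux system, and this illustrates explicit run-time loading, a close cousin of the stub-based lazy linking described above.

/* Sketch: explicit run-time loading of a shared library (POSIX dlopen API).
   Build with: gcc demo.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load only when needed */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* unmap when done */
    return 0;
}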
7
Logical vs. Physical Addresses When running a user program, the CPU deals with logical addresses. The memory controller sees a stream of physical addresses. The memory management unit (MMU) does the conversion. [Figure: logical view shows processes 1–3 each with its own contiguous address space; physical view shows the same processes placed in physical memory alongside the OS]
8
Swapping and address binding
9
Sometimes the OS needs to swap out a process to make room for others. When it is time to swap the process back in, does it need to go back to the same location in physical memory? –Yes, if compile-time or load-time binding is used [e.g. embedded systems] –No, if dynamic binding is allowed [e.g. modern general-purpose systems]
10
Relocation CPU generates logical addresses in the range 0 to max. The MMU adds the value r of a base (relocation) register to each logical address to get the physical address. The physical address space is r + 0 to r + max. e.g. with r = 14000, a logical address of 346 gets mapped to the physical address 14346.
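A small sketch (illustrative only, not hardware) of what the MMU does here, using the slide's numbers; the limit check from the allocation slides below is included for completeness, and the LIMIT value is an assumption.

/* Sketch of MMU relocation: physical = relocation + logical, with a limit check. */
#include <stdio.h>

#define RELOCATION 14000u   /* base (relocation) register r */
#define LIMIT       4000u   /* assumed size of the logical address space */

unsigned translate(unsigned logical) {
    if (logical >= LIMIT) {
        /* Real hardware would trap to the OS (addressing error). */
        fprintf(stderr, "trap: logical address %u out of range\n", logical);
        return 0;
    }
    return RELOCATION + logical;
}

int main(void) {
    printf("logical 346 -> physical %u\n", translate(346));  /* prints 14346 */
    return 0;
}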
11
Relocation Register Example
12
Allocation When processes are created or swapped in, we must allocate space for them. The relocation register scheme only supports contiguous allocation: –OS stays resident in low memory –To protect other processes, use a limit register with the relocation register
13
Hardware support for contiguous allocation
14
Contiguous allocation Basic idea: keep a list of available memory holes. When a new process arrives, find a hole for it. [Figure: successive snapshots of physical memory with the OS in low memory and processes 5, 8, and 2 resident; process 8 terminates, leaving a hole, and processes 9 and 10 are later allocated into the available holes]
15
Contiguous Allocation Strategies First fit: allocate from the first hole that is large enough –Circular version: start the search from the last-allocated position Best fit: allocate from the smallest hole that is large enough Worst fit: allocate from the largest hole Simulations show first fit and best fit are usually better than worst fit, but there is no clear winner between the two.
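A compact sketch of first fit over a list of free holes; the structure names and in-place shrinking are assumptions for illustration, not the book's code.

/* Sketch: first-fit search over a singly linked list of free holes. */
#include <stddef.h>

struct hole {
    size_t start;          /* physical start address of the hole */
    size_t size;           /* size of the hole in bytes */
    struct hole *next;
};

/* Return the start address of an allocation of 'request' bytes, or (size_t)-1
   if no single hole is large enough (external fragmentation). The chosen hole
   is shrunk in place; a zero-size hole would be unlinked by the caller. */
size_t first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {      /* first hole that is large enough */
            size_t addr = h->start;
            h->start += request;       /* carve the allocation off the front */
            h->size  -= request;
            return addr;
        }
    }
    return (size_t)-1;
}

Best fit would instead scan the whole list and remember the smallest hole that still satisfies the request; worst fit would remember the largest.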
16
Dealing with fragmentation in allocation Internal fragmentation: –Usually we round a process’s allocation up to whole blocks. –The space wasted by this rounding is internal fragmentation. External fragmentation: –Loading and releasing memory blocks can leave small holes spread over physical memory. –It is possible to have enough total free memory to satisfy a request even though it is broken into pieces that are individually too small. –This wasted space is called external fragmentation.
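For instance, when requests are rounded up to whole blocks the waste can be computed directly; the block and request sizes below are arbitrary assumptions.

/* Sketch: internal fragmentation when a request is rounded up to whole blocks. */
#include <stdio.h>

int main(void) {
    const unsigned block   = 4096;               /* assumed allocation block size */
    const unsigned request = 10000;              /* bytes the process actually needs */
    unsigned blocks   = (request + block - 1) / block;   /* round up */
    unsigned internal = blocks * block - request;        /* wasted inside the allocation */
    printf("%u blocks allocated, %u bytes of internal fragmentation\n",
           blocks, internal);                    /* 3 blocks, 2288 bytes */
    return 0;
}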
17
Dealing with external fragmentation Compaction: –Shuffle processes around in physical memory to create larger holes –Only works if processes are relocatable (dynamic address binding) –Time consuming and complex (I/O completion) Better solutions: –Paging: break logical address space into fixed-size pages and allow pages to be placed arbitrarily –Segmentation: split logical address space into a small set of variable-sized segments
18
Dealing with external fragmentation with paging The most common non-contiguous allocation method. –Physical memory is broken into fixed-size frames. –Logical memory is broken into same-size pages. –Logical addresses now consist of a page number and an offset. –Page number p indexes a page table. –Page table entry p contains the frame number f where page p resides in physical memory. –Requires hardware support. Can we still have fragmentation? Internal yes, external no.
19
Paging address translation architecture
20
Logical and physical memory
21
Example: 4-bit addresses, 2 bits for the page number, 2 bits for the offset, 4-byte pages
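A tiny sketch of the translation for exactly this scheme (4-bit logical addresses, 2-bit page number, 2-bit offset); the page-table contents below are made-up assumptions.

/* Sketch: paging translation with 4-bit logical addresses,
   2-bit page number, 2-bit offset (4-byte pages). */
#include <stdio.h>

int main(void) {
    /* page_table[p] = frame number f; these frame numbers are assumptions. */
    unsigned page_table[4] = { 5, 6, 1, 2 };

    for (unsigned logical = 0; logical < 16; logical++) {
        unsigned p = logical >> 2;                       /* top 2 bits: page number */
        unsigned d = logical & 0x3;                      /* low 2 bits: offset      */
        unsigned physical = (page_table[p] << 2) | d;    /* frame * 4 + offset      */
        printf("logical %2u -> page %u, offset %u -> physical %2u\n",
               logical, p, d, physical);
    }
    return 0;
}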
22
How to select page size? Advantages of big pages: –Page table size is smaller –Frame allocation accounting is simpler Advantages of small pages: –Internal fragmentation is reduced
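The trade-off can be made concrete with a quick calculation; the 32-bit address space and 4-byte page-table entries below are assumptions for illustration.

/* Sketch: page-table size vs. average internal fragmentation for two page sizes. */
#include <stdio.h>

int main(void) {
    const unsigned long long address_space = 1ULL << 32;     /* assumed 32-bit */
    const unsigned pte_size = 4;                              /* bytes per entry, assumed */
    unsigned long long page_sizes[2] = { 4096, 4ULL << 20 };  /* 4 KB vs. 4 MB */

    for (int i = 0; i < 2; i++) {
        unsigned long long entries = address_space / page_sizes[i];
        printf("page size %llu bytes: %llu entries (%llu KB of page table), "
               "~%llu bytes average internal fragmentation per process\n",
               page_sizes[i], entries, entries * pte_size / 1024,
               page_sizes[i] / 2);   /* about half a page wasted in the last page */
    }
    return 0;
}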
23
Process loading Split the process’s logical address space into pages. Find enough free frames for the process’s pages; if necessary, swap out an old process to free a sufficient number of frames. Copy the process’s pages into their designated frames. Fill in the page table.
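A sketch of the frame-allocation step just described; the free-frame bitmap, table sizes, and function name are invented for illustration.

/* Sketch: allocate one free frame per page and record it in the page table. */
#include <stdbool.h>
#include <stddef.h>

#define NUM_FRAMES 256

bool frame_free[NUM_FRAMES];     /* true if the frame is available */

/* Fill page_table[0..num_pages-1] with free frame numbers.  Returns false if
   there are not enough free frames; the caller would then swap out another
   process and retry. */
bool load_process(size_t num_pages, size_t page_table[]) {
    size_t available = 0;                         /* first pass: count free frames */
    for (size_t f = 0; f < NUM_FRAMES; f++)
        if (frame_free[f]) available++;
    if (available < num_pages)
        return false;

    size_t page = 0;                              /* second pass: claim the frames */
    for (size_t f = 0; f < NUM_FRAMES && page < num_pages; f++) {
        if (frame_free[f]) {
            frame_free[f] = false;
            page_table[page++] = f;               /* copy the page's contents into frame f here */
        }
    }
    return true;
}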
24
Process loading
25
Page Table Implementation The page table maps logical memory page numbers to physical memory frame numbers. Additional information: protection bits –Readable, writable, valid/invalid, etc. If very small, the page table can be stored in fast registers. Usually, the page table is too big: –32-bit addresses, 4 KB pages: more than a million entries –Needs to be stored in main memory –Each memory access now requires two accesses –A page table of a megabyte or more for a small process is wasteful
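One possible layout of a page-table entry with the protection bits mentioned above; the field widths are illustrative assumptions for 32-bit addresses and 4 KB pages.

/* Sketch: a page-table entry holding a frame number plus protection bits. */
#include <stdint.h>

struct pte {
    uint32_t frame  : 20;   /* physical frame number */
    uint32_t valid  : 1;    /* entry maps a real frame (else the hardware traps) */
    uint32_t read   : 1;    /* readable */
    uint32_t write  : 1;    /* writable */
    uint32_t unused : 9;
};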
26
Two-level page tables
27
To reduce the size of the page table, use a hierarchical structure: the page table itself is paged. The page number field is split into an outer page number p1 and an inner page number p2 (an offset within a page of the page table). Now one logical memory access requires 3 physical accesses! Address layout: page number p1 (10 bits), p2 (10 bits), page offset d (12 bits).
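A sketch of the 10/10/12 split just described; the table lookups themselves are omitted, the indexing arithmetic is the point.

/* Sketch: splitting a 32-bit logical address for a two-level page table
   (10-bit p1, 10-bit p2, 12-bit offset d).  p1 indexes the outer page table,
   p2 indexes the selected inner page-table page, d is the offset in the frame. */
#include <stdint.h>

void split(uint32_t logical, uint32_t *p1, uint32_t *p2, uint32_t *d) {
    *p1 = (logical >> 22) & 0x3FF;   /* top 10 bits  */
    *p2 = (logical >> 12) & 0x3FF;   /* next 10 bits */
    *d  =  logical        & 0xFFF;   /* low 12 bits  */
}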
28
Page table caching Storing page tables in main memory doubles or triples the effective memory access time. Solution: the Translation Lookaside Buffer (TLB), an associative, high-speed, on-chip memory (cache). The search for the value corresponding to a tag is done in parallel across all entries. [Figure: TLB entries are (tag, value) pairs: tag = page #, value = frame #]
30
Paging with a TLB
31
TLB Usage The TLB stores a few page number / frame number pairs. When the CPU generates a logical address: –The TLB is searched in parallel for the page number tag –If present (TLB hit), the frame number is used immediately to access memory –If not present (TLB miss), the frame number has to be retrieved from the page table in memory; the retrieved frame number is used to access memory, and the TLB is updated with the retrieved page/frame pair –If the TLB is full, an entry has to be selected for replacement The TLB must be flushed and restored on each context switch.
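A software sketch of this hit/miss flow; a real TLB searches all entries in parallel in hardware, so the linear scan, the round-robin replacement, and the table sizes here are simplifying assumptions.

/* Sketch: TLB lookup with fall-back to the in-memory page table. */
#include <stdbool.h>

#define TLB_SIZE 16

struct tlb_entry { unsigned page; unsigned frame; bool valid; };
struct tlb_entry tlb[TLB_SIZE];
unsigned next_victim;                     /* simple round-robin replacement */

unsigned page_table[1024];                /* in-memory page table (contents assumed) */

unsigned page_table_lookup(unsigned page) {
    return page_table[page];              /* costs an extra memory access on real hardware */
}

unsigned translate(unsigned page) {
    for (int i = 0; i < TLB_SIZE; i++)    /* hardware does this search in parallel */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;          /* TLB hit */

    unsigned frame = page_table_lookup(page);              /* TLB miss */
    tlb[next_victim] = (struct tlb_entry){ page, frame, true };
    next_victim = (next_victim + 1) % TLB_SIZE;             /* pick a victim entry */
    return frame;
}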
32
Effective access times Average memory access time depends on the TLB hit ratio. Suppose: –TLB search takes 20 ns –Memory accesses take 100 ns –Single-level page table –80% hit rate Effective access time (EAT) = 0.8 x (20 ns + 100 ns) + 0.2 x (20 ns + 100 ns + 100 ns) = 140 ns Paging overhead is 40% (100 ns vs. 140 ns)
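The same arithmetic as a tiny program, so the hit ratio and timings (taken from the slide) can be varied.

/* Sketch: effective access time for a single-level page table with a TLB. */
#include <stdio.h>

int main(void) {
    double tlb = 20.0, mem = 100.0, hit = 0.80;   /* numbers from the slide */
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + mem + mem);
    printf("EAT = %.0f ns (overhead %.0f%%)\n", eat, 100.0 * (eat - mem) / mem);
    return 0;                                     /* prints: EAT = 140 ns (overhead 40%) */
}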
33
Segmentation An alternative way of breaking up the logical address space. User programs are broken into logical segments –Main program, function1, global heap, etc. Each process has several variable-sized memory segments. Hardware provides base and limit registers for each segment. The packing problem is simpler (smaller chunks), but external fragmentation still occurs –Solution: combine paging and segmentation
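A sketch of segmented translation with per-segment base and limit values; the segment-table contents and the three-segment layout are made-up assumptions.

/* Sketch: segmentation -- a logical address is (segment number, offset). */
#include <stdio.h>

struct segment { unsigned base; unsigned limit; };

/* Example segment table: main program, a function, the global heap (sizes assumed). */
struct segment seg_table[3] = { {1400, 1000}, {6300, 400}, {4300, 1100} };

int translate(unsigned s, unsigned offset, unsigned *physical) {
    if (offset >= seg_table[s].limit)
        return -1;                        /* hardware traps: offset exceeds segment limit */
    *physical = seg_table[s].base + offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(1, 53, &phys) == 0)
        printf("segment 1, offset 53 -> physical %u\n", phys);   /* 6353 */
    return 0;
}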
34
Segmented Address Space
35
Logical and physical address spaces [Figure: segments 1–4 appear in order in the user's logical space but are placed non-contiguously, in a different order, in physical memory]
36
Hardware support for segments
37
Segmentation example
38
What have we learned? Address binding schemes –Compile time, link time, load time, dynamic Logical vs. physical address spaces Allocation schemes –Contiguous –Paging Translation Lookaside Buffer (TLB) page table cache –Segmented memory architecture