CHAPTER 9 - MEMORY MANAGEMENT


1 CHAPTER 9 - MEMORY MANAGEMENT
CGS Operating System Concepts UCF, Spring 2004

2 OVERVIEW (Part 1) Address Binding Logical vs. Physical Address Space
Early Techniques Dynamic Loading Overlays Swapping Contiguous Allocation Methods Partitions Fragmentation Dynamic Linking

3 OVERVIEW (Part 2) Paging Segmentation Segmentation with Paging
Page Tables Frame Tables Segmentation Segment Tables Segmentation with Paging

4 BACKGROUND Program must be brought into memory and turned into a process for it to be executed. New or Job Queue contains collection of jobs on disk that are waiting to be brought into memory for execution. Single Program Systems User memory consists of a single partition Task was to make the most use of that limited space Multiprogramming Systems Must keep multiple programs in memory simultaneously Task is to manage & share this limited space among multiple programs

5 ADDRESS BINDING In order to load a program, instructions and associated data must be mapped or “bound” to specific locations in memory Address binding can happen at three different stages. Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes. Load time: Must generate relocatable code if memory location is not known at compile time. Execution or Run time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Special hardware (e.g., MMUs) required for address mapping.

6 ADDRESS BINDING TIMES User programs go through several steps before being executed.

7 SINGLE PROGRAM, COMPILE TIME BINDING

8 SINGLE PROGRAM, LOAD TIME BINDING

9 CONTIGUOUS ALLOCATION, SINGLE PARTITION SYSTEMS
Simplest form of memory management to implement Characteristic of early computer systems Contiguous: All portions of memory associated with a process are kept together - not broken into parts Single Partition: User area consists of one large address space, all of which is allocated to a single user process Hardware support required: Fence Register, Mode Bit Assuming fixed OS size, can bind at compile time Assuming variable OS size, must bind at load time

10 PROBLEMS WITH SINGLE PARTITIONS
Lots of wasted space (varies by program size) Generally low memory utilization Poor CPU & I/O Utilization (not multiprogramming) Program size limited to size of memory Can use overlays and dynamic loading to obtain better performance (squeeze larger program into smaller space) Dynamic linking possible but not really applicable Not generally used with multiprogramming If multiprogramming, must swap programs in and out of memory High overhead if overlays or swapping used.

11 DYNAMIC LOADING Program divided into subroutines or procedures
Subroutine not loaded until it is called. Better memory-space utilization; unused routines are never loaded. Useful when large amounts of code are needed to handle infrequently occurring cases. Example: payroll code whose routines vary with the number of days in the month.

12 OVERLAYS Keep in memory only those instructions and data that are needed at any given time. Needed when process is larger than amount of memory allocated to it. Examples: 2-Pass Assembler, Multi-Pass Compiler

13 SWAPPING A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems
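
As a rough worked example (the figures here are assumptions for illustration, not from the slides): swapping a 100 MB memory image to a disk that transfers 50 MB per second takes 100 / 50 = 2 seconds in each direction, so a swap out followed by a swap in of another process costs on the order of 4 seconds of transfer time before disk latency is even counted.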

14 SWAPPING CONTINUED

15 CONTIGUOUS ALLOCATION, MULTIPLE PARTITION SYSTEMS
Supports true multiprogramming More than one program resident in memory at same time Each program / process assigned a partition in memory Partitions may be of different sizes Sizes may be fixed at system start up or vary over time Additional hardware support required Base and Limit Registers Partition registers (fixed partitions) OS must keep track of each partition in some type of table: e.g., Partition #, location, size, status Difficult to bind at compile time - don’t always know which partition will be assigned
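
For illustration only, the per-partition bookkeeping described above might be modeled in C roughly as follows (the struct and field names are assumptions, not taken from any particular OS):

    #include <stddef.h>

    /* Hypothetical entry in an OS partition table for contiguous,
       multiple-partition allocation. Names are illustrative only. */
    enum partition_status { PART_FREE, PART_ALLOCATED };

    struct partition_entry {
        int    number;     /* partition # */
        size_t base;       /* starting physical address */
        size_t size;       /* partition length in bytes */
        enum partition_status status;
        int    owner_pid;  /* owning process when allocated, else -1 */
    };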

16 MULTIPLE FIXED PARTITIONS, COMPILE TIME BINDING

17 MULTIPLE FIXED PARTITIONS, LOAD TIME BINDING

18 DYNAMIC LINKING With static linking, all called routines are linked into a single large program image before loading. With dynamic linking, linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is already in the process's memory address space. Particularly useful for libraries.
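
The stub idea can be pictured with an ordinary function pointer; the sketch below is only a conceptual analogy in C (a real dynamic linker patches linkage tables with OS support, and the names here are invented):

    #include <stdio.h>

    static void library_routine(void) {        /* stands in for the shared-library code */
        printf("library routine running\n");
    }

    static void stub(void);                    /* forward declaration */

    /* Calls go through this pointer; it initially targets the stub. */
    static void (*routine)(void) = stub;

    static void stub(void) {
        routine = library_routine;             /* "replace itself" with the real address */
        routine();                             /* then execute the routine */
    }

    int main(void) {
        routine();                             /* first call resolves via the stub */
        routine();                             /* later calls jump straight to the routine */
        return 0;
    }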

19 FIXED PARTITION ISSUES
Can use overlays or dynamic loading within a single partition to improve performance Can use dynamic linking to share libraries or common routines among resident processes Swapping possible but not required for multiprogramming. May use for controlling process mix. May establish separate job queues for each partition based on memory requirements To bind at compile time, must know the partition to be assigned beforehand Size of OS can vary by changing size/location of lowest partition

20 FIXED PARTITIONS (cont.)
Advantages: Allows multiprogramming Compared with single partition systems - Less wasted space Better memory utilization Better processor and I/O utilization Context Switching faster than Swapping Disadvantages: Internal Fragmentation (unused memory inside partitions) Number of concurrent processes limited by number of partitions Program limited to size of largest partition

21 MULTIPLE DYNAMIC PARTITIONS
Multiple-partition allocation: Hole – block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. Operating system maintains information about: allocated partitions and free partitions (holes). [Figure: successive memory snapshots with the OS at the bottom; processes 5, 8, and 2 are resident, process 8 terminates and leaves a hole, which is later allocated to process 9 and then process 10.]

22 MULTIPLE DYNAMIC PARTITIONS, LOAD TIME BINDING

23 DYNAMIC PARTITION ISSUES
Memory allocated dynamically, with each job getting a partition of size >= its program length. Can't bind at compile time - partition locations are always changing over time. Only load or run time binding. Can't bind at load time if you want to compact. Can use overlays or dynamic loading within a single partition to improve performance. Can use dynamic linking to share libraries or common routines among resident processes. Swapping used primarily as a tool for compaction (roll-out, roll-in). May also be used for controlling the process mix. Size of OS can vary by relocating the lowest partition higher in memory. How to select the hole/partition for the next job?

24 DYNAMIC STORAGE ALLOCATION PROBLEM
How to satisfy a request of size n from a list of free holes. First-fit: Allocate the first hole that is big enough. Best-fit: Allocate the smallest hole that is big enough Must search entire list/table, unless ordered by size Produces the smallest leftover hole. Worst-fit: Allocate the largest hole Must also search entire list/table. Produces the largest leftover hole. First-fit and best-fit better than worst-fit in terms of speed and storage utilization.
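
A minimal sketch of the three placement policies over a list of free holes, in C (illustrative only; a real allocator would also split the chosen hole, coalesce neighbors, and record the allocation):

    #include <stddef.h>

    struct hole { size_t start; size_t size; };

    /* Return the index of the hole chosen for a request of size n,
       or -1 if nothing fits. policy: 0 = first-fit, 1 = best-fit,
       2 = worst-fit.                                               */
    static int choose_hole(const struct hole *holes, int count,
                           size_t n, int policy) {
        int chosen = -1;
        for (int i = 0; i < count; i++) {
            if (holes[i].size < n)
                continue;                        /* hole too small */
            if (policy == 0)
                return i;                        /* first-fit: take the first match */
            if (chosen == -1 ||
                (policy == 1 && holes[i].size < holes[chosen].size) ||  /* smallest fit */
                (policy == 2 && holes[i].size > holes[chosen].size))    /* largest fit  */
                chosen = i;
        }
        return chosen;
    }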

25 DYNAMIC PARTITIONS (cont.)
Advantages: Allows multiprogramming Compared with multiple fixed partition systems - Less wasted space (assuming compaction) Better memory utilization Higher degree of multiprogramming possible (more partitions) Better processor and I/O utilization Larger programs possible (can be as large as the user area) Disadvantages: External fragmentation can occur Number of concurrent processes is unpredictable - limited at any given time by the number of partitions, which can vary More overhead than fixed partitions, more complicated

26 FRAGMENTATION REVIEW Internal fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used. Reduce internal fragmentation by using dynamic partitions or assigning the smallest usable fixed partition to a job External Fragmentation – enough memory space exists to satisfy a job request, but memory space is not contiguous - unused space between partitions Reduce external fragmentation by compaction Shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic, and is done at execution time - uses run time binding

27 LOGICAL VS. PHYSICAL MEMORY
Distinction necessary with run-time or execution-time binding A logical address space is bound to a separate physical address space Logical address – generated by the CPU from instructions; also referred to as virtual address. Physical address – address seen by the memory address register Logical and physical addresses are the same in compile-time and load-time address-binding schemes Logical (virtual) and physical addresses differ in execution-time address-binding scheme.

28 MEMORY MANAGEMENT UNIT
A hardware device called a memory management unit (MMU) is required to map virtual to physical addresses on the fly. Base register replaced by a relocation register. In the MMU scheme, the value in the relocation register is added to every logical address generated by a user process before it is sent to the memory-address register. The user program deals with logical addresses; it never sees the real physical addresses.
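
In effect the MMU performs one comparison and one addition per access; a hedged sketch in C (function and variable names are made up, and the 14000/346 figures are just sample values):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simulate the relocation-register scheme: check the logical address
       against the limit, then add the relocation value.                 */
    static bool mmu_translate(unsigned long logical, unsigned long relocation,
                              unsigned long limit, unsigned long *physical) {
        if (logical >= limit)
            return false;                   /* addressing error: trap to the OS */
        *physical = relocation + logical;   /* relocate on the fly */
        return true;
    }

    int main(void) {
        unsigned long phys;
        if (mmu_translate(346, 14000, 16384, &phys))
            printf("logical 346 -> physical %lu\n", phys);   /* prints 14346 */
        return 0;
    }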

29 MULTIPLE FIXED PARTITIONS, RUN TIME BINDING

30 MULTIPLE DYNAMIC PARTITIONS, RUN TIME BINDING

31 MEMORY MANAGEMENT METHODS

32 SEGMENTATION Memory-management method that supports user view of memory. A program is a collection of segments. A segment is a logical unit in that program such as a: main program, sub-procedure or sub-routine, function, local variables space, global variables space, common block, stack, symbol table, arrays, etc.

33 LOGICAL VIEW OF SEGMENTATION
[Figure: segments 1–4 shown contiguously in the user/programmer space and mapped to non-contiguous blocks in the physical memory space.]

34 SEGMENTATION ADDRESS SPACES
Logical address space for a program is a collection of variable length segments. Each segment has a unique name or number. Logical address consists of a two-tuple: <segment-number, offset>. A segment table maps these two-dimensional logical addresses to one-dimensional physical addresses; each table entry has: base: contains the starting physical address where the segment resides in physical memory. limit: specifies the logical length of the segment.
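
A sketch of the lookup the hardware performs for a <segment-number, offset> pair, in C (the structure and names are assumptions for illustration):

    #include <stdbool.h>
    #include <stddef.h>

    struct segment_entry { size_t base; size_t limit; };

    /* Translate <seg, offset> through a per-process segment table with
       'count' entries; returning false corresponds to a protection trap. */
    static bool segment_translate(const struct segment_entry *table, size_t count,
                                  size_t seg, size_t offset, size_t *physical) {
        if (seg >= count)                 /* segment number not below STLR */
            return false;
        if (offset >= table[seg].limit)   /* offset beyond the segment's length */
            return false;
        *physical = table[seg].base + offset;
        return true;
    }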

35 SEGMENTATION ADDRESSING

36 OS SUPPORT FOR SEGMENTATION
A process' PCB contains a pointer to the segment table. During a context switch this pointer is loaded into the STBR. Segment-table base register (STBR) points to the process's segment table location in memory. A process' PCB also contains a value indicating the length of the segment table. During a context switch this value is loaded into the STLR. Segment-table length register (STLR) indicates the number of segments used by a program. Depending on hardware support in the CPU: Can load a small # of segment table entries in special registers. Otherwise, each data request requires two memory accesses: 1 for the segment table entry, 1 for the data.

37 SEGMENTATION EXAMPLE

38 ANOTHER EXAMPLE

39 SEGMENTATION ISSUES Cannot use compile or load time binding. Program code must be relocatable: Run time binding - MMU calculates the physical address from the logical address two-tuple <segment #, offset>. Requires a special compiler to divide the program into logical segments and create relocatable code. In addition, the OS must support segmentation. Long term scheduler allocates memory for all segments, usually on a first-fit or best-fit basis: Segments vary in length, so memory allocation is an example of the dynamic storage-allocation problem. Can result in external fragmentation. Use compaction to combine fragments into larger, more usable space.

40 MEMORY PROTECTION WITH SEGMENTATION
A segment-number is legal if # < STLR. Segment bases and limits (in the segment table) are used to ensure memory accesses stay within a segment. Each entry in a segment table can include a validation bit and privilege information: validation bit = 1 ⇒ legal segment; validation bit = 0 ⇒ illegal segment. May also include read/write/execute privileges.

41 SHARING SEGMENTS

42 SHARING SEGMENTS (cont.)
A segment is shared if it is pointed to by segment table entries for different processes To share a segment: segment code can be self-referencing but not self-modifying (aka, pure code, reentrant code) For self referencing segments: processes may use different segment #s for a shared segment if references are indirect (e.g., PC-relative) processes must use same segment # for shared segment if references are direct and incorporate the segment # itself.

43 SEGMENTATION (cont.) Advantages:
Programs no longer require contiguous memory Easier to load larger programs by breaking into parts No internal fragmentation Easier for programmer to think of segments/objects than memory as linear array. Enforced access to segments (e.g., read-only segments) Further improvement in memory utilization possible with: Segment Sharing Dynamic Loading of Segments

44 SEGMENTATION (cont.) Disadvantages: External Fragmentation
Requires additional hardware / software to implement: Segmentation-aware compilers, Segment Tables & Registers, an MMU capable of mapping logical to physical addresses, Associative Memory for faster segment table lookup. Can require two memory accesses if the segment table is large. Segment table lookup and address calculation can slow down execution. Entire program must reside in memory (assuming no sharing or dynamic loading of segments): length of program <= size of memory

45 PAGING Memory-management method that views program as “pages” or code segments of fixed and equal size. Unlike segmentation, no functional division of program’s logical address space Memory divided into “frames” with fixed size equal to that of pages. Page size = Frame Size OS maintains list of free frames Page/Frame size is power of 2, between 512 bytes and 16 Megabytes Most systems use page sizes between 4K and 8K bytes

46 PAGING (cont.) Program pages are loaded into free frames wherever they may exist in memory. For basic paging, must have sufficient number of free frames to load entire program before execution can begin. Set up a page table to help translate logical to physical addresses page table relates logical page number to physical frame

47 LOGICAL-TO-PHYSICAL PAGE MAPPING

48 PAGING ADDRESS SPACES
Logical address consists of a two-tuple: <page-number, offset>. Address generated by the CPU is divided into: Page number (p) – used as an index into a page table which contains the base address of each page (frame) in physical memory. Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.

49 PAGING ADDRESSING

50 PAGING EXAMPLE In this example, each page is 4 bytes.
To find the location of "n" in memory: "n" is in page 3 at offset 1; page 3 is loaded in frame 2. Calculate the physical address as frame * page size + offset = 2 * 4 + 1 = 9 (physical location).

51 OS SUPPORT FOR PAGING A process' PCB contains a pointer to the page table. During a context switch this pointer is loaded into the PTBR. Page-table base register (PTBR) points to the process's page table location in memory. Page tables can be of fixed or variable length. Fixed length page tables use a valid/invalid bit to indicate if a logical page reference is invalid. For variable length page tables, a page-table length register (PTLR) indicates the number of pages used by a program: logical page number < PTLR for valid memory references.

52 DETERMINING PAGE/OFFSET
Make the page size a power of the base of the number system used. Decimal examples: Page size = 100 bytes (10²), for logical address 12573: page number is 125, offset is 73. Page size = 10 bytes (10¹), for logical address 12573: page number is 1257, offset is 3. Similar approach using binary memory addresses: Page size = 8 bytes (2³), for logical address 1101011 (binary): page number is 1101 (decimal 13), offset is 011 (decimal 3).
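
Because the page size is a power of two, the split is just a shift and a mask; a small C sketch using the slide's 8-byte pages (3 offset bits):

    #include <stdio.h>

    #define PAGE_BITS   3u                         /* page size = 2^3 = 8 bytes */
    #define OFFSET_MASK ((1u << PAGE_BITS) - 1u)

    int main(void) {
        unsigned logical = 0x6B;                   /* binary 1101011 */
        unsigned page    = logical >> PAGE_BITS;   /* 1101 binary = 13 */
        unsigned offset  = logical & OFFSET_MASK;  /* 011 binary  = 3  */
        printf("page %u, offset %u\n", page, offset);
        return 0;
    }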

53 ANOTHER PAGING EXAMPLE

54 MEMORY ACCESS TIMES WITH PAGING
Depending on hardware support in CPU Can load small # of page table entries in a fast-lookup hardware cache called associative registers or translation look-aside buffers (TLBs) Otherwise, each data request requires two memory accesses: 1 for page table entry, 1 for data

55 PAGING HARDWARE WITH TLB

56 MEMORY ACCESS TIMES (cont.)
TLB Lookup = ε time units Memory Access Time = m time units Hit ratio (α) is the percentage of times that a page number is found in the TLB Effective Access Time (EAT): EAT = α(ε + m) + (1 – α)(ε + m + m) Example: TLB lookup = 20 nanoseconds Memory Access = 100 nanoseconds Hit Ratio = 80% or .8 EAT = .8 (20 + 100) + .2 (20 + 100 + 100) = 140 nanoseconds per access
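
The same calculation written out as a tiny C program with the slide's numbers:

    #include <stdio.h>

    int main(void) {
        double epsilon = 20.0;    /* TLB lookup time, ns    */
        double m       = 100.0;   /* memory access time, ns */
        double alpha   = 0.80;    /* TLB hit ratio          */

        /* Hit: TLB lookup + one memory access.
           Miss: TLB lookup + page-table access + data access. */
        double eat = alpha * (epsilon + m)
                   + (1.0 - alpha) * (epsilon + 2.0 * m);
        printf("EAT = %.0f ns\n", eat);            /* prints 140 */
        return 0;
    }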

57 PAGING ISSUES Cannot use compile or load time binding. Program code must be relocatable: Run time binding - MMU calculates the physical address from the logical address two-tuple <page #, offset>. Requires a special compiler to divide the program into pages and create relocatable code. In addition, the OS must support paging. Long term scheduler must allocate memory for all pages. Pages are fixed in length, so memory allocation is similar to the static multiple partitioning method. Can result in internal fragmentation if the end of the program does not require a full page.

58 MEMORY PROTECTION WITH PAGING
A page number is legal if page # < PTLR or the valid/invalid bit is "valid". Offsets must be less than the page size; the page size serves as the limit check.

59 SHARING PAGES

60 SHARING PAGES (cont.) Shared code Private code and data
One copy of read-only (reentrant) code can be shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes - i.e., have the same page numbers. Private code and data: Each process keeps a separate copy of non-shared code and/or data. The pages for private code and data can appear anywhere in the logical address space.

61 TWO-LEVEL PAGE-TABLE SCHEME

62 TWO-LEVEL PAGING (cont.)
A logical address (on a 32-bit machine with a 4K page size) is divided into: a page number consisting of 20 bits, and a page offset consisting of 12 bits. Since the page table is itself paged, the page number is further divided into: a 10-bit page number and a 10-bit page offset. Thus, a logical address is laid out as | p1 (10 bits) | p2 (10 bits) | d (12 bits) |, where p1 is an index into the outer page table for the process, and p2 is the displacement within the inner page table pointed to by the outer page table entry.
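
Extracting p1, p2, and d from a 32-bit logical address follows directly from the 10/10/12 split; a C sketch (the example address is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t logical = 0x00ABCDEFu;              /* arbitrary sample address */
        uint32_t d  = logical & 0xFFFu;              /* low 12 bits: page offset */
        uint32_t p2 = (logical >> 12) & 0x3FFu;      /* middle 10 bits: inner page table index */
        uint32_t p1 = (logical >> 22) & 0x3FFu;      /* top 10 bits: outer page table index */
        printf("p1=%u p2=%u d=%u\n",
               (unsigned)p1, (unsigned)p2, (unsigned)d);
        return 0;
    }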

63 TWO-LEVEL, 32-BIT ADDRESS TRANSLATION

64 Multilevel Paging and Performance
Since each level is stored as a separate table in memory, converting a logical address to a physical one may take four memory accesses. Even though the time needed for one memory access is quintupled, caching permits performance to remain reasonable. A cache hit rate of 98 percent yields: EAT = 0.98 × 120 + 0.02 × 520 = 128 nanoseconds, which is only a 28 percent slowdown in memory access time.

65 PAGING (cont.) Advantages:
Programs no longer require contiguous memory Easier to load larger programs by breaking into small parts of equal size No compaction required No external fragmentation However, may have unused frames Further improvement in memory utilization possible with Page sharing Bottom Line: Higher memory utilization and therefore a greater degree of multiprogramming

66 PAGING (cont.) Disadvantages: Some internal fragmentation
On average this equals 1/2 page size * # of processes. Requires additional hardware / software to implement: Paging-aware compilers, Page Tables & Registers, an MMU capable of mapping logical to physical addresses, Associative Memory (TLB) for faster page table lookup. Can require two or more memory accesses if the page table is large or the required entry is not in the TLB. Page table lookup and address calculation can slow down execution. Entire program must reside in memory: length of program <= size of memory
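
As a quick illustration of the average internal fragmentation (numbers assumed): with 4 KB pages and 10 resident processes, the expected waste is roughly 1/2 × 4 KB × 10 = 20 KB.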

