© Janice Regan, CMPT 300, May 2007. CMPT 300 Introduction to Operating Systems. Memory: Mono and multiprogramming.


1 CMPT 300 Introduction to Operating Systems Memory: Mono and multiprogramming

2 Memory Hierarchy
 Very fast access, expensive, limited, volatile
 Registers
 Cache
 RAM
 Disk (internal hard disks)
 Backup media (DVD, tape, USB memory, …)
 Relatively inexpensive, slower access, persistent, abundant

3 Memory manager
 Efficiently keeps track of
 The parts of available memory in use
 The parts of available memory not in use
 The particular memory used by each process
 Manages
 Allocation/de-allocation of memory
 Bases its management on a model of memory understandable to the programmer (address space / memory map)

4 Simple system:
 Mono-programming with direct memory access
 One process in memory (RAM only) at a time
 One process completes, then the next process runs
 All programs are loaded beginning at the same known address in physical memory

5 Direct access
 Mono-programming
 When we compile a program, we need to refer to addresses in instructions inside the compiled file.
 All programs use the same set of addresses (the actual addresses of the physical memory). This means that only 1 program can run at a time or address conflicts occur
 Program can access all available memory (possible for a process to damage the OS)

6 Example
 Multiple programs at once
 Addressing is a problem
 [Diagram: two copies of a program containing "Jump to 3" and "Jmp to 7" instructions, shown against memory address columns 0–7 and 8–F; both copies refer to the same physical addresses, so the jumps conflict.]

7 Simple system:
 Mono-programming with direct memory access
 One process in memory (RAM only) at a time
 One process completes, then the next process runs
 All programs are loaded beginning at the same known address in physical memory (blue on next slide)

8 Memory use: Direct Access
 [Memory-map diagram (address 0, increasing addresses) comparing three layouts: older mainframes — OS reserved in RAM; embedded systems — OS in read-only memory; early PCs — reserved device drivers and read-only memory (BIOS …), with process data and stack/heap in RAM.]

9 Variations: direct access
 Even with such a simple model of the system there are many possible variations
 The OS may occupy the beginning or the end of the block of RAM (illustrated in the memory map)
 The process may be one large image, or the image may be broken into regions for different purposes: data, process instructions, dynamic memory for execution of the program
 The OS may be stored in one part of the RAM, or may be separated into more than one section, say drivers and the remaining OS

10 Swapping: the simplest system
 Even in the simple one-partition, direct access system it is possible to multi-program if we use swapping.
 When we consider programs A and B sharing this system, swapping requires that
 the entire contents of memory for A are saved to disk
 the context of program A is saved
 the context of program B is loaded
 the memory image of B (previously saved) is loaded into memory

11 Problems with direct access
 No security: possible to access all of memory, even the memory holding the OS
 Can corrupt the OS
 Prevents dividing the memory between multiple programs
 Cannot run many programs ‘at the same time’.
 Swapping helps but is a slow way to service multiple programs

12 Other approaches
 Any more complicated (realistic) approaches require relocation of process images at locations other than the beginning of a single partition, and require the use of memory abstraction.
 Multiprocessing with fixed partitioning
 Multiprocessing with dynamic partitioning
 Multiprocessing with paging
 Multiprocessing with segmentation
 Multiprocessing with virtual memory and paging
 Multiprocessing with virtual memory and segmentation

13 Memory abstraction
 Create a ‘model of memory’ easy for the programmer to understand. It should
 Allow multiple processes to share memory.
 Make sure each process can use only the area of memory allocated to it (protect other areas of memory from the process)
 Remove the need for using absolute hardware addresses
 Make it easier to transparently use different types of memory (RAM, ROM, registers, …)

14 Address Space: memory map
 An address space maps all available memory in the system to a single contiguous list of addresses
 A memory map shows us which parts of the address space (which list of memory addresses) are reserved for the OS and which parts are in use by particular processes

15 Address spaces
 The abstraction of an address space can be used to describe physical memory directly. This is referred to as a physical address space
 We will use the term ‘address space’ to refer to a logical address space: an abstraction providing a list of addresses that transparently access all memory in the system (can access more than RAM)

16 Logical / Physical
 Addresses used and generated by the CPU (to interface with the logical memory model the programmer is using) are logical addresses
 Addresses used in the memory address register to actually access a particular location in physical memory are physical addresses.

17 Basic Idea: A map of memory
 The user of the system needs to be able to refer to parts of available memory to
 Access or save information
 Refer to a particular location within memory
 The memory model (address space) may refer transparently to any combination of
 RAM / ROM memory
 Registers
 Extended portions of memory stored on disk

18 Address space / Memory map
 [Memory-map diagram: address 0 with increasing addresses, regions reserved for the OS and read-only memory.]

19 Other approaches
 Any more complicated (realistic) approaches require relocation of process images at locations other than the beginning of a single partition.
 Multiprocessing with fixed partitioning: fixed size partitions (equal or varying sizes), variable size partitions
 Multiprocessing with dynamic partitioning
 Multiprocessing with paging
 Multiprocessing with segmentation
 Multiprocessing with virtual memory and paging
 Multiprocessing with virtual memory and segmentation

20 Memory map: multiple processes
 [Memory-map diagram: the operating system followed by partitions holding processes P1–P4.]

21 Types of addresses
 Logical address: a reference to a memory location in a logical address space
 Relative address: the address is expressed as a location relative to some point (reference address) in physical memory
 Physical address: an absolute address in physical memory

22 Address Binding: Mapping logical to physical
 Compile time: Create absolute addresses that refer directly to physical memory. You must know where your program will begin in physical memory before compiling
 Load time: Compiler generates relocatable code. The starting address is determined at load time.
 Execution time: The process can be loaded at different memory locations at different times during its execution. Needs special hardware

23 Relocation: at load time
 Programs are compiled with relative addresses (often relative to location 0 of physical memory)
 When the program is loaded, all addresses in the code are incremented by an offset to the start of the memory block being used. (Done by the loader at the time the program is loaded)
 When the program is executed, the addresses in the program all refer to the location at which the program is running
 Can we relocate again to a different location after swapping?
 Not generally, there are problems (next slide)
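The load-time relocation step above can be sketched in Python. This is a minimal illustration, not the course's implementation: the instruction format (tuples of opcode, operand, and an address-flag) is invented here.

```python
# Sketch of load-time relocation: each instruction operand that is an
# address gets the partition's start address added to it, once, by the
# loader. The (opcode, operand, is_address) format is hypothetical.

def relocate_at_load(code, load_address):
    """Return a copy of `code` with every address operand shifted by
    `load_address`."""
    relocated = []
    for opcode, operand, is_address in code:
        if is_address:
            operand += load_address   # done once, at load time
        relocated.append((opcode, operand, is_address))
    return relocated

# Program compiled relative to address 0, e.g. "jump to 3":
program = [("JMP", 3, True), ("LOAD", 7, True), ("ADD", 1, False)]

loaded = relocate_at_load(program, 100)   # partition starts at 100
# loaded[0] == ("JMP", 103, True)
```

Note the one-shot nature of this scheme: after loading, the code contains absolute physical addresses, which is exactly why re-relocating after a swap is problematic (next slide).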

24 Relocation: example / problem
 After the first instruction in the program has been executed, relocation may break code
 Consider a pointer (one of the variables in the code)
 When it is set by the code it points to a particular location in memory. The address in the pointer is the physical address of the desired variable.
 When the code is relocated, the start address of the code is moved from A to B.
 All the addresses that are part of the instructions are updated by the loader to be consistent with the new location of the process image in physical memory
 However, when the pointer is used it points to the physical location of the variable relative to A, not relative to B, and the code breaks
 Fix using dynamic relocation (special hardware: the MMU)

25 Relocation: at execution time
 Programs are compiled with relative addresses (often relative to location 0 of physical memory)
 When the program is loaded, a base register can be used to indicate where the program starts in physical memory
 Each relative address encountered in the program must be shifted to its physical address by adding the base register address before the instruction containing the address is executed. Addresses are managed by the MMU

26 Protection
 All parts of a process image should be protected from all other processes.
 Can use an additional register, the limit register, to help do this.
 The size of the process's address space is placed in the limit register
 Each time an address is calculated (base register added to relative address), the relative address is compared to the value in the limit register
 If the relative address is > the value in the limit register, then the OS knows the memory that would be accessed belongs to another process, and can prevent the access
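The base/limit scheme of the last two slides can be sketched together. A toy model assuming the check described above (relative address against the limit, then base added); real MMUs do this in hardware per memory access.

```python
# Sketch of execution-time relocation with base and limit registers.

class MMU:
    def __init__(self, base, limit):
        self.base = base      # start of the process's partition
        self.limit = limit    # size of the process's address space

    def translate(self, relative_address):
        # Protection check: an address past the limit would land in
        # another process's memory, so the access is refused.
        if relative_address >= self.limit:
            raise MemoryError("address outside process address space")
        # Relocation: shift to the physical address at execution time.
        return self.base + relative_address

mmu = MMU(base=3000, limit=500)
assert mmu.translate(100) == 3100
```

Because the shift happens on every access rather than once at load time, the process can be moved (base register updated) without touching the code or its data, which fixes the stale-pointer problem from the relocation example.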

27 Other approaches
 Now let's consider other approaches, no swapping
 Multiprocessing with fixed partitioning: fixed size partitions (equal or varying sizes), variable size partitions
 Multiprocessing with dynamic partitioning
 Multiprocessing with paging
 Multiprocessing with segmentation
 Multiprocessing with virtual memory and paging
 Multiprocessing with virtual memory and segmentation

28 Multiprocessing: fixed partitions
 Partitions of equal size
 Any program whose image size is <= the partition size can be loaded into any available partition
 Internal fragmentation will occur since not all processes will fill their partitions
 Some processes may be too large for the partition size: requires overlays; the programmer breaks the program into units that can fit in one partition and alternates them in the partition
 [Memory-map diagram: the operating system followed by equal-size partitions holding processes P1–P5.]

29 Multiprocessing: fixed partitions
 Partitions of different sizes
 Any program whose image size is <= the partition size of a partition can be loaded into that partition
 Now need to choose how to assign a process to a particular partition
 It makes sense to place a process in the smallest partition into which it will fit, either
 Into the partition with the next largest physical size
 Into the partition with the next largest physical size which is presently available

30 Fixed partitions, varying sizes
 [Memory-map diagram: the operating system followed by different-size partitions holding processes P1–P4.]

31 Fixed partitions: variable sizes
 Each process is placed in a queue for the smallest partition that it fits into
 FIFO queue for each size of process
 Advantages:
 Minimum internal fragmentation, optimal use of memory in each partition
 Always know which partition will be used by a particular process, making relocation easier
 Know by the size of the process which partition's offset should be used (with 1 partition of each size), so the compiler can produce absolute addresses or the OS (loader) can take care of relative addressing

32 Fixed partitions: variable sizes
 Each process is placed in a queue for the smallest partition that it fits into
 FIFO queue for each size of process
 Disadvantages:
 If there are no jobs in a particular size range, a partition is left idle. Jobs smaller than the ideal size range for the partition could execute and use the idle slot, improving the system's efficiency
 A job can only swap in or out of one partition.
 Partitions may be idle while jobs are waiting

33 Fixed partitions varying size: 1 FIFO
 [Memory-map diagram: the operating system followed by different-size partitions holding processes P1–P4.]

34 Fixed partitions varying size: 1 FIFO
 All processes are put in a single FIFO queue
 When a process is to be loaded, it is loaded into the smallest available partition that will hold it
 The OS (loader) must be able to take care of relative addressing once a partition has been chosen (cannot swap to another partition later)
 Advantage:
 No idle partitions when small jobs are waiting

35 Fixed partitions varying size: 1 FIFO
 All processes are placed in a single FIFO queue
 When a process is to be loaded, it is loaded into the smallest available partition that will hold it
 Disadvantages:
 Increased internal fragmentation
 Large jobs may need to wait for small jobs using the large partition to finish
 The compiler cannot produce absolute addresses
 Load-time relocation: can only swap into or out of a single partition (even if >1 with that size)
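The single-queue policy described on these two slides can be sketched as follows. The partition sizes and the dict representation are illustrative, not from the slides.

```python
# Sketch of the 1-FIFO fixed-partition policy: load the process into
# the smallest *currently free* partition that will hold it.

def place(process_size, partitions):
    """partitions: list of {'size': int, 'free': bool}.
    Returns the index of the chosen partition and marks it used,
    or None if nothing fits (the process waits in the FIFO queue)."""
    candidates = [(p["size"], i) for i, p in enumerate(partitions)
                  if p["free"] and p["size"] >= process_size]
    if not candidates:
        return None
    size, index = min(candidates)   # smallest partition that fits
    partitions[index]["free"] = False
    return index

parts = [{"size": 4, "free": True}, {"size": 8, "free": True},
         {"size": 16, "free": True}]
assert place(6, parts) == 1   # 8 is the smallest partition holding 6
```

The internal-fragmentation disadvantage is visible here: the size-6 process occupies a size-8 partition, wasting 2 units; and if the size-8 partition had been busy, the process would have taken the 16-unit one instead.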

36 Multiprocessing: dynamic partitions
 All processes are placed in a single FIFO queue
 When a process is to be loaded, it is loaded into a portion of memory of the correct size to hold it (minimal internal fragmentation, <1 block per process)
 The portion of memory may start at any location in memory
 Leads to external fragmentation. Can compact, but this is slow and not generally done
 Must know how much memory is needed so the ‘right’ part of memory can be chosen
 How do we choose the right part of memory? Need a placement algorithm

37 Multiprocessing: dynamic partitions
 [Diagram: a sequence of memory maps showing the OS plus dynamic partitions as processes P1–P4 are loaded and finish, leaving holes between the remaining partitions.]

38 Memory management of dynamic partitions
 As when working with a disk, we can keep track of free and used partitions (blocks) using a bit map or a linked list.
 Same advantages and disadvantages

39 Dynamic partitions:
 Over time, fragmentation of free space will occur
 Can compact: expensive
 Can use linked lists holding free-partition information
 Sort links in the list with respect to address (in the address space)
 Using a doubly linked list allows for easier aggregation of adjacent free partitions (works better for the buddy system) that will be in adjacent links
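The aggregation of adjacent free partitions in an address-sorted list can be sketched like this. A list of tuples stands in for the linked list; the hole layout is made up for illustration.

```python
# Sketch of coalescing adjacent free partitions in an address-sorted
# free list: two entries merge when one ends where the next begins.

def coalesce(free_list):
    """free_list: (start, length) tuples. Returns the list sorted by
    start address with memory-adjacent entries merged."""
    merged = []
    for start, length in sorted(free_list):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # Previous free partition ends exactly where this one
            # begins, so replace the pair with one larger partition.
            prev_start, prev_len = merged.pop()
            merged.append((prev_start, prev_len + length))
        else:
            merged.append((start, length))
    return merged

assert coalesce([(0, 10), (10, 5), (30, 4)]) == [(0, 15), (30, 4)]
```

With a doubly linked list the same merge happens in place when a partition is freed, by checking just the previous and next links rather than re-sorting.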

40 Dynamic partitions: placement
 Best fit: choose from among the available blocks of memory the smallest one the new process will fit into
 Worst performer: must search the entire memory space each time
 Leaves the smallest free blocks between processes
 First fit: scan memory from address 0 and find the first block of memory big enough to hold the new process
 Usually best and fastest
 Leads to less frequent allocation at the end of memory space (a chance for larger blocks to become available there)

41 Dynamic partitions: placement
 Next fit: begin to scan memory from the location of the last process placement and find the first block of memory big enough to hold the new process
 Not as good as first fit: breaks up potential large blocks at the end of memory space
 Quick fit (works particularly well with the buddy system):
 Keep separate lists of available partitions in different size ranges.
 To find a partition, go to the list with the next highest partition size and take the next partition
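First fit and best fit, as described above, can be contrasted in a few lines. The free list of (start, length) holes is a stand-in data structure, and the hole layout is invented for the example.

```python
# Sketch of two placement algorithms over a free list of
# (start, length) holes.

def first_fit(holes, size):
    # Scan from address 0; take the first hole big enough.
    for start, length in sorted(holes):
        if length >= size:
            return start
    return None

def best_fit(holes, size):
    # Examine every hole; take the smallest one big enough
    # (slower, and leaves the smallest leftover fragments).
    fitting = [(length, start) for start, length in holes if length >= size]
    return min(fitting)[1] if fitting else None

holes = [(0, 5), (20, 12), (50, 8)]
assert first_fit(holes, 8) == 20   # first hole >= 8 scanning from 0
assert best_fit(holes, 8) == 50    # tightest hole >= 8
```

The example shows the trade-off on the slides: first fit stops at the 12-unit hole and splinters it, while best fit pays for a full scan to use the 8-unit hole exactly.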

42 Dynamic partitions: ‘buddy’ system
 Memory blocks are available in groups of size 2^N, from 2^L to 2^U
 When all the memory is available, we will get a partition of the correct size 2^N for the first process. Only one partition in the list
 If the process fits in half of the memory, break the memory into two partitions
 Put both halves in the list of partitions
 Consider the first of the pair.
 Repeat this step until a partition of the correct size is produced
 When you have an area of the correct size, put your process (P1) into the first of the pair of partitions

43 Dynamic partitions: ‘buddy’ system
 Subsequently, find the first block in the list of partitions >= the size needed
 Repeat the previous two steps
 When multiple processes finish and leave a possible larger block free, aggregate pairs into single blocks
 Process P3 finishes: the empty partition between P3 and P2 is added to the freed partition to create one twice as large
 This recreates a partition that was previously split
 The larger partition replaces the one previously in the list
 [Diagram: partitions holding processes P1–P4, with adjacent freed blocks merging back into a larger partition.]
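The splitting half of the buddy scheme can be sketched as follows. This is a simplified model assuming per-order free lists (a common way to implement it, not taken from the slides); freeing/merging is omitted to keep the sketch short.

```python
# Minimal sketch of buddy allocation: repeatedly split a free 2^k
# block in half until a block of the requested power-of-two size
# (2^order units) is produced.

def buddy_allocate(free_lists, order):
    """free_lists[k] is a list of free block addresses of size 2^k.
    Returns the address of a block of size 2^order, or None."""
    # Find the first free block of order >= the one requested.
    k = order
    while k < len(free_lists) and not free_lists[k]:
        k += 1
    if k == len(free_lists):
        return None
    block = free_lists[k].pop(0)
    # Split repeatedly; each split frees the second half (the "buddy").
    while k > order:
        k -= 1
        free_lists[k].append(block + 2 ** k)
    return block

# One free 16-unit block (order 4); allocate a 4-unit block (order 2).
free_lists = [[], [], [], [], [0]]
addr = buddy_allocate(free_lists, 2)
assert addr == 0
assert free_lists[2] == [4] and free_lists[3] == [8]
```

Note how the leftover buddies at addresses 4 and 8 are exactly the partitions that merging (when processes finish) would later recombine into the original 16-unit block.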

44 Memory management: Contiguous allocation
 Fixed partitions
 More internal fragmentation
 Large processes must be built with overlays
 Dynamic partitions
 External fragmentation
 More complicated OS
 To minimize external fragmentation
 To deal with dynamic relocation (can also be resolved with additional hardware, the MMU)

45 How many processes?
 CPU utilization will increase with more processes
 How many processes are needed before we can assume most CPU time will be used?
 Simplest approximation
 For a better approximation, use queuing theory

46 Swapping
 No matter how many processes we have in memory at one time (within reason), there will be times when all processes are blocked
 When all processes are blocked the CPU is idle. This is inefficient.
 To avoid this inefficiency we can introduce swapping.
 When all processes are blocked, move one or more out of memory and move a process ready to execute into the memory that is made available
 Later, the process swapped out will be reloaded into memory
 Need to be able to dynamically relocate the process
 Next time the process is loaded into memory it may not be loaded at the same location

47 Dynamic partitions: swapping
 [Diagram: a sequence of memory maps showing processes P1–P5 being swapped into and out of dynamic partitions over time.]

48 Maintenance of free memory
 Bitmap: one bit for each unit of memory
 Fast lookup for fixed partitions (can use 1 bit per partition)
 Expensive for dynamic partitions (use one bit per word): large table, expensive to search, difficult to search for large blocks of words
 Linked list: one link for each partition (each contiguous group of words) that is free
 Link contains location and length
 More links as memory becomes fragmented
 How to aggregate links when they contain adjacent partitions? Use a doubly linked list
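The "difficult to search for large blocks" point can be made concrete. A sketch with a Python list of bits standing in for the bitmap:

```python
# Sketch of bitmap free-memory tracking: one bit per allocation unit
# (0 = free, 1 = used). The expensive operation is scanning for a run
# of k consecutive free units.

def find_free_run(bitmap, k):
    """Return the start index of the first run of k free units,
    or None. This linear scan over every bit is why large bitmaps
    are costly to search."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return None

assert find_free_run([1, 0, 0, 1, 0, 0, 0], 3) == 4
```

A free list avoids this scan because each link already records a whole contiguous free region with its length, at the cost of more links as memory fragments.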

49 Choosing partition size
 Static partitions
 Must be chosen when the system is configured
 Based on user statistics for similar systems
 Dynamic partitions
 Size of the memory image of the process
 But what about dynamically allocated variables?
 Leave space in each process's image for dynamic variables
 Have a shared heap area for dynamic allocation from running processes (is the heap swapped?)

50 Growing data segments
 [Memory-map diagram: the operating system and processes P1–P4, each process image divided into a code segment and a data segment holding all local variables plus dynamically allocated variables.]

51 Partition size
 What about dynamically allocated variables?
 Leave space in each process's image for dynamic variables
 What happens if you need more than the allocated heap space?
 Waste space if you do not use your dynamic allocation

52 Growing data segments
 [Memory-map diagram: the operating system and processes P1–P3, each with a code segment and stack, plus a shared heap.]

