
1 W4118 Operating Systems Instructor: Junfeng Yang

2 Logistics
 Homework 4 out, due 3:09pm 3/26
 You will add a new scheduling policy to Linux
 This is likely the most difficult programming assignment of this course, so start today

3 Last lecture: Advanced Scheduling
 Advanced scheduling concepts
  - Multilevel queue scheduling
  - Multiprocessor scheduling
  - Real-time scheduling
 Linux scheduling
  - Goals
  - Data structures: runqueue, task_struct
  - Procedures: schedule(), scheduler_tick()

4 Today: Memory Management
 Dynamic memory allocation
  - Stack and heap
 Intro to memory management

5 Motivation for Dynamic Memory Allocation
 Static (compile-time) allocation is not possible for all data
  - Recursive calls
  - Runtime input from the user
  - Complicated data structures
 Two ways of dynamic allocation
  - Stack allocation: restricted, but simple and efficient
  - Heap allocation: more general, but less efficient and more difficult to implement

6 Stack Organization
 Stack: memory is freed in the opposite order from allocation
  - Last In, First Out (LIFO)
 When is it useful?
  - When the memory usage pattern follows LIFO
  - Example: function call frames
 Implementation
  - A pointer separates allocated space from free space
  - Allocate: increment the pointer
  - Free: decrement the pointer
 Advantages
  - Simple and efficient
  - Keeps all free space contiguous
 Disadvantage
  - Not suitable for data whose lifetime does not follow LIFO order
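The pointer-bumping scheme above can be sketched in a few lines of C. This is an illustrative toy (the names and the fixed-size backing array are assumptions, not any real runtime's implementation); a real call stack is managed by the compiler and hardware, but the allocate/free logic is the same:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stack allocator: one pointer (stack_top) separates
   allocated space (below it) from free space (above it). */
#define STACK_SIZE 4096
static char stack_mem[STACK_SIZE];
static size_t stack_top = 0;          /* index of first free byte */

/* Allocate: bump the pointer; NULL if out of space. */
void *stack_alloc(size_t n) {
    if (stack_top + n > STACK_SIZE)
        return NULL;
    void *p = &stack_mem[stack_top];
    stack_top += n;
    return p;
}

/* Free: must be called in LIFO order; just moves the pointer back. */
void stack_free(size_t n) {
    assert(n <= stack_top);
    stack_top -= n;
}
```

Note that both operations are a single pointer adjustment, which is why stack allocation is so cheap and why free space always stays in one contiguous piece.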

7 Heap Organization
 Heap: allocate from arbitrary locations
  - Memory consists of allocated areas and free areas (holes)
 When is it useful?
  - When allocations and frees are unpredictable
  - Complex data structures: new in C++, malloc in C, kmalloc in the Linux kernel
 Advantage: general; works for arbitrary allocation and free patterns
 Disadvantage: can end up with small chunks of free space

8 Fragmentation
 Small chunks of free memory, too small to satisfy future allocations
  - External: visible to the system
  - Internal: visible to the process (e.g., if allocation is done at some granularity)
 Goals
  - Reduce the number of holes
  - Keep holes large
 Stack: all free space stays together as one big hole

9 Heap Implementation
 Data structure: linked list of free blocks
  - The free list chains free blocks together
 Allocation
  - Choose a block large enough for the request
  - Update the free list
 Free
  - Add the block back to the list
  - Merge adjacent free blocks
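A minimal free-list sketch in C, under stated assumptions: the struct and function names are illustrative (not from any real allocator), blocks are handed out whole rather than split, and the merging of adjacent free blocks mentioned above is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Each free block records its size and links to the next one. */
struct free_block {
    size_t size;
    struct free_block *next;
};

static struct free_block *free_list = NULL;

/* Allocation: unlink and return the first block large enough
   (first fit); NULL if no block can satisfy the request. */
struct free_block *fl_alloc(size_t n) {
    struct free_block **pp = &free_list;
    for (; *pp; pp = &(*pp)->next) {
        if ((*pp)->size >= n) {
            struct free_block *b = *pp;
            *pp = b->next;            /* update the free list */
            return b;
        }
    }
    return NULL;
}

/* Free: push the block back onto the list (no coalescing here). */
void fl_free(struct free_block *b) {
    b->next = free_list;
    free_list = b;
}
```

A real allocator would also split an oversized block, returning the remainder to the free list, and merge adjacent free blocks on free to fight fragmentation.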

10 Best vs. First vs. Worst
 Best fit
  - Search the whole list on each allocation
  - Choose the smallest block that can satisfy the request
  - Can stop the search early if an exact match is found
 First fit
  - Choose the first block that can satisfy the request
 Worst fit
  - Choose the largest block (most leftover space)
 Which is better?

11 Examples
 Which algorithm is best depends on the sequence of requests
 Example: the free list has two blocks, of sizes 20 and 15 bytes
  - Allocation requests: 10, then 20
  - Allocation requests: 8, 12, then 12
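The example above can be checked with a small simulation (the helper names are hypothetical; a block shrinks in place when it serves a request, with no splitting into separate list nodes and no coalescing). With the 10-then-20 sequence, first fit puts the 10 into the 20-byte block and then cannot serve the 20; best fit saves the 20-byte block and serves both. With 8, 12, 12, the advantage flips:

```c
#include <assert.h>
#include <stddef.h>

/* Serve one request from an array of free-block sizes using
   first fit or best fit; return 1 on success, 0 on failure. */
static int serve(size_t blocks[], int n, size_t req, int best_fit) {
    int pick = -1;
    for (int i = 0; i < n; i++) {
        if (blocks[i] < req)
            continue;                 /* too small for this request */
        if (pick < 0 || blocks[i] < blocks[pick])
            pick = i;                 /* track the smallest fit so far */
        if (!best_fit)
            break;                    /* first fit: take the first match */
    }
    if (pick < 0)
        return 0;
    blocks[pick] -= req;              /* hole shrinks by the request size */
    return 1;
}

/* Run a whole request sequence; return how many requests succeeded. */
static int run(size_t blocks[], int n, const size_t *reqs, int m,
               int best_fit) {
    int served = 0;
    for (int i = 0; i < m; i++)
        served += serve(blocks, n, reqs[i], best_fit);
    return served;
}
```

Running both strategies on both sequences shows that neither dominates: each wins on one sequence and loses on the other.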

12 Comparison of Allocation Strategies
 Best fit
  - Tends to leave very large holes and very small holes
  - Disadvantage: the very small holes may be useless
 First fit
  - Tends to leave "average"-size holes
  - Advantage: faster than best fit
 Worst fit
  - Simulation shows that worst fit is the worst in terms of storage utilization

13 Today: Memory Management
 Dynamic memory allocation
  - Stack
  - Heap
  - Allocation strategies
 Intro to memory management

14 Motivation for Memory Management
 Simple uniprogramming: a single segment per process
  - Early batch systems
  - Early personal computers
 Disadvantages
  - Only one process can run at a time
  - A process can destroy the OS
 [Diagram: physical memory split between the OS and a single user process]

15 Multiple Processes in Memory
 [Diagram: Process A and Process B resident in memory at the same time]

16 Multiprogramming Wish-list
 Sharing
  - Multiple processes coexist in main memory
 Transparency
  - Processes are not aware that memory is shared
  - Run regardless of the number and/or locations of other processes
 Protection
  - Cannot corrupt the OS or other processes
  - Privacy: cannot read the data of other processes
 Efficiency: should have reasonable performance
  - The purpose of sharing is to increase efficiency
  - Do not waste CPU or memory resources

17 Relocation Transparency
 Relocation: a process can run anywhere in memory (its location can't be predicted in advance)
 How?

18 Background
 The compiler compiles source files into object files
 The linker links object files and system libraries into a program
 The loader loads the program and dynamically linked libraries from disk into memory, placing them within a process so it can run

19 Dynamic Linking
 Linking is postponed until execution time
 Implementation
  - A small piece of code, the stub, locates the appropriate memory-resident library routine
  - The stub replaces itself with the address of the routine, then executes the routine
 Advantages
  - Useful for libraries: updating the library updates all programs that use it (shared libraries)
  - Saves space
 Disadvantages
  - Difficult to manage dependencies
  - Runtime failures

20 Dynamic Loading
 A routine is not loaded until it is called
 Better memory-space utilization; an unused routine is never loaded
 Useful when large amounts of code are needed to handle infrequently occurring cases
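On POSIX systems, dynamic loading is exposed through dlopen and dlsym. A minimal sketch, assuming a Linux system where the math library is available as libm.so.6 (the soname is a platform assumption; older glibc also needs -ldl at link time): the library is mapped into the process only when this function runs, not at program start.

```c
#include <dlfcn.h>
#include <stddef.h>

typedef double (*cos_fn)(double);

/* Load libm on demand and look up cos by name; NULL on failure. */
cos_fn load_cos(void) {
    void *h = dlopen("libm.so.6", RTLD_LAZY);  /* load at call time */
    if (!h)
        return NULL;
    /* Locate the routine inside the freshly mapped library. */
    return (cos_fn)dlsym(h, "cos");
}
```

If load_cos() is never called, the library is never loaded, which is exactly the memory-utilization benefit the slide describes.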


22 When to Relocate?
 Compile time
  - Hardwire the physical location at compile time (absolute code)
  - Problem: each program needs to be written with the others in mind; not really transparent
 Load time
  - Compiled code is relocatable (relocatable code)
  - How? All addresses are relative to a start address; change the start address to relocate
  - Problem: once loaded, the process can't change location or move
 Execution time
  - Can move during execution
  - This is what's generally done, but it needs hardware support

23 Relocation at Execution Time
 Map each program-generated address to a hardware address dynamically, at every reference
 MMU: Memory Management Unit
  - Controlled by the OS
 Program: logical (virtual) addresses
 Hardware: physical (real) addresses
 Address space: each process's view of memory
 [Diagram: the CPU issues logical addresses; the MMU translates them into the physical addresses sent to memory]

24 Address Spaces
 [Diagram: logical view of address spaces AS1, AS2, AS3 vs. their physical placement in memory alongside the OS]

25 Hardware Support
 Two operating modes
  - Privileged (protected, kernel) mode: when the OS runs
    - Entered on a trap into the OS (system calls, interrupts, exceptions)
    - Allows certain instructions to be executed
    - Allows the OS to access all of physical memory
  - User mode: when user processes run
    - Logical addresses are translated to physical addresses
    - Protects the OS and other processes' physical memory
 How to implement protection?
  - Base register: start physical location of the address space
  - Limit register: last valid address the process may access
    - The process appears to have private memory of size equal to the limit register

26 Implementation
 Translation on every memory access
  - Compare the logical address to the limit register; if greater, generate an exception
  - Add the base register to the logical address to generate the physical address
 [Diagram: the logical address is checked against the limit register; if within bounds, the base register is added to produce the physical address, otherwise an exception is raised]
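The check-then-add translation above can be modeled in a few lines of C (a software sketch of what the hardware does on every access; the struct and function names are illustrative). Following the previous slide, the limit register holds the last valid logical address, and an out-of-range access is modeled by returning -1 in place of the hardware exception:

```c
#include <assert.h>
#include <stdint.h>

/* Per-process relocation registers, loaded by the OS on a
   context switch. */
struct mmu {
    uint32_t base;   /* start physical location of the address space */
    uint32_t limit;  /* last valid logical address                   */
};

/* Translate a logical address to a physical one; -1 models the
   exception raised when the address is beyond the limit. */
int64_t translate(const struct mmu *m, uint32_t logical) {
    if (logical > m->limit)          /* out of bounds: trap to the OS */
        return -1;
    return (int64_t)m->base + logical;
}
```

In hardware, the comparison and the addition can proceed in parallel, which is why this scheme adds almost no latency to a memory access.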

27 Managing Processes with Base and Limit
 Context switch: save and restore the base and limit registers
 Protection requirements
  - A user process cannot change the base and limit registers
  - A user process cannot switch to privileged mode

28 Pros and Cons of Base and Limit
 Contiguous allocation: each process occupies a contiguous block of memory
 Advantages
  - Supports dynamic relocation of address spaces
  - Supports protection across multiple address spaces
  - Cheap: a few registers and little logic
  - Fast: the add and the compare can be done in parallel
 Disadvantages
  - Each process must be allocated contiguously in real memory
    - Fragmentation: may be unable to allocate a new process
    - Solution: swapping (next)
  - Must allocate memory that may not be used
  - No sharing: cannot share limited parts of the address space
    - e.g., cannot share code while keeping data private

29 Swapping
 A process can be swapped temporarily out of memory to a backing store, then brought back into memory for continued execution
 Backing store: a fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
 Roll out, roll in: a swapping variant used with priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
 The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
 Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)

30 Schematic View of Swapping

