Unit III: Memory Management and Virtual Memory
Contents:
Swapping
Demand paging
Hybrid system with swapping and demand paging
Memory management requirements
Memory partitioning
Paging
Segmentation
Security issues
Hardware and control structures
Operating system software
Linux & Windows memory management
Process State Transition:
[Fig.: UNIX process state-transition diagram. States: Created; Ready to Run in Memory; Ready to Run, Swapped; Preempted; User Running; Kernel Running; Asleep in Memory; Sleep, Swapped; Zombie. Transitions include fork, swap out, swap in, wakeup, sleep, preempt, reschedule process, system call/interrupt, interrupt return, and return to user. A newly created process goes to "Ready to Run, Swapped" when there is not enough memory (swapping system only).]
Swapping: Fig. Swapping of processes
Swap Space:
The swap device is a block device holding a configurable section of the disk.
Swap space is allocated temporarily, not persistently as for files.
I/O is faster when done on groups of contiguous blocks.
The kernel tracks free swap space in a map data structure.
Allocation follows a first-fit algorithm over contiguous blocks.
Allocating Swap Space:
Map entries are (address, units) pairs describing free runs of blocks.
Initial map: (1, 10000)
After allocating 100 units: (101, 9900)
After allocating 50 units: (151, 9850)
After allocating a further 100 units: (251, 9750)
Allocating Swap Space (pseudocode):

malloc(address_of_map, number_of_units)
    for (every map entry)
        if (current map entry can fit the requested units)
            if (requested units == number of units in entry)
                delete entry from map
            else
                adjust start address and unit count of entry
            return original address of entry
    return -1
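The first-fit map allocation above can be sketched in Python; the (address, units) tuple representation is an assumption for illustration.

```python
def malloc_map(swap_map, units):
    """First-fit allocation over a list of (address, units) free runs.
    Returns the start address of the allocated run, or -1 on failure."""
    for i, (addr, size) in enumerate(swap_map):
        if size >= units:                              # entry can fit request
            if size == units:
                del swap_map[i]                        # exact fit: delete entry
            else:
                swap_map[i] = (addr + units, size - units)  # shrink entry
            return addr
    return -1                                          # no run large enough

# Reproduces the map states from the allocation example above.
swap_map = [(1, 10000)]
print(malloc_map(swap_map, 100))   # -> 1;   map becomes [(101, 9900)]
print(malloc_map(swap_map, 50))    # -> 101; map becomes [(151, 9850)]
```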
Freeing Swap Space — Case 1: the freed resources fill a hole but are not contiguous with any entry in the map.
Map before (Address, Units): (251, 9750)
Free 50 units at address 101
Map after: (101, 50), (251, 9750)
Freeing Swap Space — Case 2: the freed resources immediately precede an entry in the map and are merged with it.
Map before (Address, Units): (101, 50), (251, 9750)
Free 100 units at address 1 (1 + 100 = 101, adjacent to the first entry)
Map after: (1, 150), (251, 9750)
Freeing Swap Space — Case 3: the freed resources completely fill the gap between two entries, so all three merge into one.
Map before (Address, Units): (1, 150), (251, 9750)
Allocate 200 units (first fit, taken from the second entry): (1, 150), (451, 9550)
Free 300 units at address 151 (exactly fills the gap between the entries)
Map after: (1, 10000)
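The three freeing cases can be handled by one coalescing routine, sketched below under the same assumed (address, units) representation.

```python
def free_map(swap_map, addr, units):
    """Return `units` blocks at `addr` to the map, merging any entries
    that become adjacent (covers all three cases on the slides)."""
    swap_map.append((addr, units))
    swap_map.sort()                                    # keep entries by address
    merged = []
    for a, n in swap_map:
        if merged and merged[-1][0] + merged[-1][1] == a:
            # Freed run touches the previous entry: coalesce them.
            merged[-1] = (merged[-1][0], merged[-1][1] + n)
        else:
            merged.append((a, n))
    swap_map[:] = merged

# Case 1: hole not adjacent to anything -> new entry.
m = [(251, 9750)]
free_map(m, 101, 50)
print(m)   # -> [(101, 50), (251, 9750)]

# Case 2: freed run immediately precedes an entry -> merged.
free_map(m, 1, 100)
print(m)   # -> [(1, 150), (251, 9750)]
```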
Swapping a Process Out:
The kernel swaps a process from memory to the swap device when it needs memory:
1. When fork() is called and memory must be allocated for the child process
2. When a system call is made to increase the size of the process
3. When a process grows larger through natural growth of its stack
4. When a previously swapped-out process should be swapped in but there is not enough memory
Swapping a Process Out:
[Fig. Mapping the process onto the swap device: the text, data, and stack regions, scattered across physical memory, are written to contiguous locations on the swap device.]
Swapping a Process In:
[Fig. Swapping the process into memory: the regions stored contiguously on the swap device are read into newly allocated, not necessarily the original, physical pages.]
Fork Swap:
There may not be enough memory when fork() is called.
The child process is swapped out and marked "ready to run".
It is swapped in when the kernel schedules it.
Expansion Swap:
The kernel reserves enough space on the swap device for the expanded size.
It adjusts the address-translation map of the process to reflect the new virtual address space.
The process is swapped out onto the newly allocated swap space.
When the kernel later swaps the process into memory, it allocates physical memory according to the new address-translation map.
Swapping Operation: [Fig. Overall swapping operation]
Demand Paging:
Not all pages of a process need to reside in memory.
Relies on the principle of locality.
Key concepts: working set, page fault, page hit.
Page replacement strategies (e.g., LRU, optimal replacement).
On a page fault, the kernel suspends execution of the process until it reads the page into memory and makes it accessible to the process.
Data Structures for Demand Paging:
Page table entries
Disk block descriptors
Page frame data table (pfdata)
Swap-use table
Page Table Entry and Disk Block Descriptor:
Each entry in a region's page table points to a disk block descriptor.
Page table entry fields: page (physical) address, age, copy-on-write, modify, reference, valid, protection.
Disk block descriptor fields: swap device number, block number, type (swap, file, or demand fill).
Page Table Entry:
Contains the physical address of the page and the following bits:
Valid: whether the page contents are legal
Reference: whether the page has been referenced recently
Modify: whether the page contents have been modified
Copy-on-write: the kernel must create a new copy of the page when a process modifies its contents (required for fork)
Age: age of the page
Protection: read/write permission
pfdata Table Entries:
Page state: e.g., whether the page is on the swap device or in an executable file
Reference count: number of processes referencing the page
Logical device and block number that holds a copy of the page
Pointers to other pfdata entries
Memory Management Requirements Memory management is intended to satisfy the following requirements: Relocation Protection Sharing Logical organization Physical organization
Relocation:
Relocation is the process of adjusting program addresses to match the actual physical addresses where the program resides when it executes.
Why is relocation needed? The programmer and translator do not know which other programs will be memory resident when the program executes.
Relocation Why is relocation needed? (continued) Active processes need to be able to be swapped in and out of main memory in order to maximize processor utilization Specifying that a process must be placed in the same memory region when it is swapped back in would be limiting Consequently it must be possible to adjust addresses whenever a program is loaded.
Protection Processes need to acquire permission to reference memory locations for reading or writing purposes Location of a program in main memory is unpredictable Memory references generated by a process must be checked at run time Mechanisms that support relocation also support protection
Sharing:
It is advantageous to allow several processes to access the same copy of a program rather than each have its own separate copy.
Memory management must allow controlled access to shared areas of memory without compromising protection.
The mechanisms used to support relocation also support sharing.
Logical Organization:
Main memory is organized as a linear (one-dimensional) address space consisting of a sequence of bytes or words, but programs are not necessarily organized this way (paging versus segmentation).
Programs are written in modules:
Modules can be written and compiled independently.
Different degrees of protection can be given to different modules (read-only, execute-only).
Sharing can be provided on a module level.
Modules correspond to the user's way of viewing the problem.
Physical Organization Two-level memory for program storage: Disk (slow and cheap) & RAM (fast and more expensive) Main memory is volatile, disk isn’t User should not have to be responsible for organizing movement of code/data between the two levels.
Physical Organization Cannot leave the programmer with the responsibility to manage memory Memory available for a program plus its data may be insufficient overlaying allows various modules to be assigned the same region of memory but is time consuming to program Programmer does not know how much space will be available
Memory Partitioning:
Virtual memory management: brings processes into main memory for execution by the processor; based on segmentation and paging.
Partitioned memory management: used in several variations in some now-obsolete operating systems; does not involve virtual memory.
Fixed Partitioning Equal-size partitions The operating system can swap out a process if all partitions are full and no process is in the Ready or Running state
Fixed Partitioning — drawbacks:
A program may be too big to fit in a partition; the program then needs to be designed with the use of overlays.
Main memory utilization is inefficient: any program, regardless of size, occupies an entire partition.
Internal fragmentation: wasted space due to the block of data loaded being smaller than the partition.
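Internal fragmentation is easy to quantify. A minimal sketch, with assumed process sizes (the figures below are illustrative, not from the slides):

```python
PARTITION = 8 * 1024                       # fixed 8 MB partitions, in KB
process_sizes = [1024, 5120, 7168, 8192]   # assumed process sizes in KB

def internal_fragmentation(size, partition=PARTITION):
    """Unused space inside the partition holding a process of `size` KB."""
    return partition - size

for size in process_sizes:
    wasted = internal_fragmentation(size)
    print(f"{size} KB process wastes {wasted} KB of its {PARTITION} KB partition")
```

Whatever its size, each process consumes a whole partition, so a 1 MB process wastes 7 MB of an 8 MB partition.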
Using unequal size partitions helps lessen the problems programs up to 16M can be accommodated without overlays partitions smaller than 8M allow smaller programs to be accommodated with less internal fragmentation
Memory Assignment: [Fig. Memory assignment with fixed partitioning]
Fixed Partitioning — disadvantages:
The number of partitions, specified at system generation time, limits the number of active processes in the system.
Small jobs will not utilize partition space efficiently.
Dynamic Partitioning:
Partitions are of variable length and number.
A process is allocated exactly as much memory as it requires.
This technique was used in IBM's mainframe operating systems.
Dynamic Partitioning:
External fragmentation: memory becomes more and more fragmented, and memory utilization declines.
Compaction: a technique for overcoming external fragmentation; the OS shifts processes so that they are contiguous and all free memory is together in one block; time consuming and wastes CPU time.
Placement Algorithms Best-fit chooses the block that is closest in size to the request First-fit begins to scan memory from the beginning and chooses the first available block that is large enough Next-fit begins to scan memory from the location of the last placement and chooses the next available block that is large enough
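The three placement algorithms can be sketched as follows; representing free memory as a list of (address, size) pairs is an assumption for illustration.

```python
def first_fit(blocks, request):
    """Scan from the beginning; take the first block large enough."""
    for addr, size in blocks:
        if size >= request:
            return addr
    return None

def best_fit(blocks, request):
    """Take the block closest in size to the request."""
    fits = [(size, addr) for addr, size in blocks if size >= request]
    return min(fits)[1] if fits else None

def next_fit(blocks, request, last):
    """Scan from the index of the last placement, wrapping around."""
    n = len(blocks)
    for i in range(n):
        addr, size = blocks[(last + i) % n]
        if size >= request:
            return addr
    return None

free = [(0, 8), (20, 12), (40, 6)]   # assumed free list
print(first_fit(free, 6))   # -> 0  (first block that is large enough)
print(best_fit(free, 6))    # -> 40 (exact-size block wins)
```

Note how the same free list and request yield different placements under each policy.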
Buddy System:
Combines features of the fixed and dynamic partitioning schemes.
The space available for allocation is treated as a single block.
Memory blocks are available in sizes of 2^K words, L ≤ K ≤ U, where:
2^L = the smallest block size that can be allocated
2^U = the largest block size that can be allocated (generally, 2^U is the size of the entire memory available for allocation).
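A minimal sketch of buddy allocation, assuming free blocks are tracked per order k in a dict mapping k to a list of free block addresses (this bookkeeping layout is an assumption, and block release/coalescing is omitted):

```python
def buddy_alloc(free_lists, k, U):
    """Allocate a block of 2**k units; split larger free blocks as needed."""
    j = k
    while j <= U and not free_lists.get(j):
        j += 1                           # find the smallest free block >= 2**k
    if j > U:
        return None                      # nothing large enough
    addr = free_lists[j].pop()
    while j > k:                         # split: keep one half, free its buddy
        j -= 1
        free_lists.setdefault(j, []).append(addr + 2**j)
    return addr

U = 10                                   # 2**10 = 1024-unit memory
free_lists = {U: [0]}                    # one free block covering everything
print(buddy_alloc(free_lists, 7, U))     # -> 0: a 128-unit block, after
                                         #    splitting off 512-, 256-, 128-unit buddies
```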
Buddy System Example: [Fig. Example of buddy-system allocation and release]
Types of Addresses:
Logical: a reference to a memory location independent of the current assignment of data to memory.
Relative: an address expressed as a location relative to some known point.
Physical (absolute): an actual location in main memory.
Paging:
Partition memory into equal fixed-size chunks that are relatively small.
Each process is also divided into small fixed-size chunks of the same size.
Pages: the chunks of a process.
Frames: the available chunks of memory.
Page Table Maintained by operating system for each process Contains the frame location for each page in the process Processor must know how to access the page table for the current process Used by processor to produce a physical address
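The page-table lookup can be sketched as below; the 1 KB page size and the particular page-to-frame mapping are assumptions for illustration.

```python
PAGE_SIZE = 1024                          # assumed page size in bytes

def translate(page_table, logical_addr):
    """Split a logical address into (page, offset) and use the page table
    to produce the physical address."""
    page = logical_addr // PAGE_SIZE      # high-order bits: page number
    offset = logical_addr % PAGE_SIZE     # low-order bits: offset in page
    frame = page_table[page]              # frame holding this page
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}           # assumed page -> frame mapping
print(translate(page_table, 1500))        # page 1, offset 476 -> 2*1024 + 476
```

With a power-of-two page size the divide and modulo are just a bit shift and mask, which is what lets hardware do this split cheaply.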
Logical Addresses: [Fig. Logical addresses under partitioning, paging, and segmentation]
Segmentation:
A program can be subdivided into segments:
Segments may vary in length.
There is a maximum segment length.
Addressing consists of two parts: a segment number and an offset.
Similar to dynamic partitioning.
Eliminates internal fragmentation.
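Segment-based translation can be sketched as follows; the segment table layout of (base, length) pairs and the sample values are assumptions for illustration.

```python
def seg_translate(seg_table, segment, offset):
    """Translate a (segment, offset) pair to a physical address,
    trapping if the offset exceeds the segment length."""
    base, length = seg_table[segment]
    if offset >= length:                  # offset past segment end: trap
        raise MemoryError("segment length violation")
    return base + offset                  # physical = segment base + offset

seg_table = [(1000, 400), (5000, 1200)]   # assumed (base, length) entries
print(seg_translate(seg_table, 1, 300))   # -> 5300
```

Unlike paging, the bounds check against the segment length is essential, because segments vary in size.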
Logical-to-Physical Address Translation (Segmentation): [Fig.]
Security Issues:
If a process has not declared a portion of its memory to be sharable, then no other process should have access to the contents of that portion of memory.
If a process declares that a portion of memory may be shared by other designated processes, then the security service of the OS must ensure that only the designated processes have access.
Buffer Overflow Attacks Security threat related to memory management Also known as a buffer overrun Can occur when a process attempts to store data beyond the limits of a fixed-sized buffer One of the most prevalent and dangerous types of security attacks
Defending Against Buffer Overflows:
Two broad countermeasure categories, prevention versus detecting and aborting:
Compile-time defenses: aim to harden new programs to resist attacks.
Run-time defenses: aim to detect and abort attacks in existing programs.
Translation Lookaside Buffer:
Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry, and one to fetch the data.
To overcome this problem, a high-speed cache for page table entries is used, called a translation lookaside buffer (TLB).
It contains the page table entries that have been most recently used.
Given a virtual address, processor examines the TLB If page table entry is present (TLB hit), the frame number is retrieved and the real address is formed If page table entry is not found in the TLB (TLB miss), the page number is used to index the process page table. First checks if page is already in main memory If not in main memory a page fault is issued The TLB is updated to include the new page entry
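The lookup flow above can be sketched with the TLB and page table modelled as dicts (an assumption for illustration; real TLBs are small associative hardware caches with an eviction policy, which this sketch omits):

```python
def lookup(tlb, page_table, page):
    """Return (frame, outcome) for a page number, following the TLB flow:
    TLB hit -> use cached frame; TLB miss -> index the page table and
    update the TLB; page absent -> page fault."""
    if page in tlb:
        return tlb[page], "hit"          # TLB hit: frame number retrieved
    if page in page_table:
        frame = page_table[page]         # TLB miss: index the page table
        tlb[page] = frame                # update TLB with the new entry
        return frame, "miss"
    raise LookupError("page fault")      # page not in main memory

tlb = {3: 9}                             # assumed cached entries
page_table = {1: 4, 3: 9}                # assumed resident pages
print(lookup(tlb, page_table, 3))        # -> (9, 'hit')
print(lookup(tlb, page_table, 1))        # -> (4, 'miss'); TLB now caches page 1
```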
Page Size:
The smaller the page size, the more pages required per process.
More pages per process means larger page tables, and hence a larger portion of the page tables resides in virtual memory.
With a small page size, a large number of pages will be found in main memory; as execution proceeds, the pages in memory all contain portions of the process near recent references, so the page fault rate is low.
Increasing the page size causes each page to contain locations further from any recent reference, so the page fault rate rises.
Basic Replacement Algorithms: FIFO, LRU, optimal, clock.
Clock Policy:
Each frame has an additional bit called the use bit.
When a page is first loaded into memory, its use bit is set to 1.
When the page is subsequently referenced, its use bit is set to 1 again.
When it is time to replace a page, the first frame encountered with the use bit set to 0 is replaced; during the search for a replacement, each use bit set to 1 is changed to 0.
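The clock policy above can be sketched directly; the frames are treated as a circular list of use bits with a "hand" index:

```python
def clock_replace(use_bits, hand):
    """Advance the clock hand until a frame with use bit 0 is found.
    Frames passed over get a second chance: their use bit is cleared.
    Returns (victim frame index, new hand position)."""
    while True:
        if use_bits[hand] == 0:
            return hand, (hand + 1) % len(use_bits)
        use_bits[hand] = 0               # second chance: clear and move on
        hand = (hand + 1) % len(use_bits)

use_bits = [1, 1, 0, 1]                  # assumed use bits for 4 frames
victim, hand = clock_replace(use_bits, 0)
print(victim)                            # -> 2; bits of frames 0 and 1 cleared
```

The loop always terminates: even if every use bit starts at 1, the first full sweep clears them all, so the second pass finds a victim.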
Resident Set Size Fixed-allocation Gives a process a fixed number of frames within which to execute When a page fault occurs, one of the pages of that process must be replaced Variable-allocation Number of pages allocated to a process varies over the lifetime of the process
Fixed Allocation, Local Replacement Scope:
The amount of allocation to give a process is decided ahead of time.
If the allocation is too small, there will be a high page fault rate.
If the allocation is too large, there will be too few programs in main memory, leading to processor idle time and increased swapping.
Variable Allocation, Local Replacement Scope When new process added, allocate number of page frames based on application type, program request, or other criteria When page fault occurs, select page from among the resident set of the process that suffers the fault Reevaluate allocation from time to time
Variable Allocation, Global Replacement Scope Easiest to implement Adopted by many operating systems Operating system keeps list of free frames Free frame is added to resident set of process when a page fault occurs If no free frame, replaces one from another process
Cleaning Policy Demand cleaning A page is written out only when it has been selected for replacement Precleaning Modified pages are written before their frame is needed Pages are written out in batches
Cleaning Policy:
The best approach uses page buffering: replaced pages are placed on two lists, modified and unmodified.
Pages on the modified list are periodically written out in batches.
Pages on the unmodified list are either reclaimed if referenced again, or lost when their frames are assigned to other pages.