1
Operating Systems course 2006/2007 Virtual memory
Igor Radovanović
2
Quiz 1: Memory manager Memory placement:
Single or multiple processes (multiprogramming)? If multiple processes are in memory: divide memory into equal-size partitions or different ones? Why do we need memory division? After memory is divided: can we change the sizes of partitions or not (dynamic adaptation to process size)? Do processes run in specific partitions or in any? Should a process occupy contiguous blocks or not? Is the running process entirely in main memory or not? Which process (or part of it) to replace once memory is full?
3
Quiz 2: Memory management
Why do we need primary memory? Why don't we use secondary memory only? Do you think that the next breakthrough in computer systems design would be to make processors that can directly fetch instructions and data from secondary memory? If YES, …….think again. Since main memory is becoming huge and low-cost, will there be a need for memory management in the future? Cache
4
(OS) MM requirements Multi-programming Non-contiguous storage
(Parts of) programs stored in main memory ONLY when used by the CPU; no memory waste; no waiting for MM by the CPU; no added complexity; no additional hardware required; unlimited memory size
5
Reminder: Memory manager strategies
Fetch strategies: demand fetch strategy, anticipatory fetch strategy. Static placement strategies: FIFO, best-fit, first-fit, worst-fit. Replacement strategies: LRU, LFU, other
6
Memory manager implementation
How is Memory manager implemented? Software + special-purpose hardware
7
Improved Memory management
Q: What needs to be done so that programs do not have to be stored contiguously, and only parts of a program, rather than the whole, need to be stored in memory? A: Split memory into blocks and try to fit parts of the program into them. If the blocks are of different sizes: segmentation; of the same size: paging. Physical memory blocks: frames. Logical memory blocks: pages
8
Memory management -recent systems-
Paged Memory Allocation Demand Paging Memory Allocation Segmented Memory Allocation Segmented/Demand Paged Allocation
9
Paging memory allocation
Processes are divided into pages of equal size. Works well if those pages are equal to the memory block size (page frame) and the disk's section size (sector). Advantages of non-contiguous memory allocation: an empty page frame may be used by any process; no compaction scheme is required; no external and almost no internal fragmentation. Problems: a mechanism is needed to keep track of page locations. Consequence: enlarges the size and complexity of the OS software
10
Non-contiguous allocation -an example-
Proc1 size = 350 lines. Internal fragmentation. The number of free pages left = 6. Any process of more than 600 lines has to wait until Proc1 ends its execution. Any process of more than 1000 lines cannot fit into memory at all! Disadvantage: the entire process must be stored in memory during its execution
11
Memory manager using paging requires 3 tables to track a Proc’s pages
1. Process table (PT) – 2 entries for each active process: the size of the process & the memory location of its page map table. Dynamic – grows/shrinks as processes are loaded/completed. One table for the whole system.
2. Page map table (PMT) – 1 entry per page: page number & corresponding page frame memory address. Page numbers are sequential (Page 0, Page 1, …). One table per process.
3. Memory map table (MMT) – 1 entry per page frame: its location & free/busy status. One table for all the page frames.
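A minimal sketch of these three tables as C structs; the field names are illustrative assumptions, not from the slides:

    /* One PT entry per active process: its size and where its PMT lives. */
    struct pt_entry  { int size; struct pmt_entry *pmt; };
    /* One PMT entry per page, indexed by the sequential page number. */
    struct pmt_entry { int page_frame; };
    /* One MMT entry per page frame: free/busy status and owning process. */
    struct mmt_entry { int busy; int owner; };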
12
Process table
13
Page Map Table Contains the page number and its corresponding page frame memory address. Ω – the referenced page is not in main memory. Reading the PMT doubles the number of memory accesses; ideally, the PMT should not be accessed at all.

    Process page number   Page frame number
    0                      8
    1                     10
    2                      5
    3                     11
    4                      Ω
14
PMT implementation: Associative memory (AM)
Used to speed up process execution. No empty entries (saves memory space). Takes advantage of temporal locality and spatial locality. Uses a key field to address data. Expensive to implement. (Diagram: an associative memory working alongside the page table.)
15
PMT implementation: translation-lookaside buffer (TLB)
Special registers where the page → page frame mapping is kept. The TLB resides in cache; the full page table is in main memory. When a page is first translated to a page frame, the mapping is read from main memory into the TLB. On subsequent references to the page, the search for the page is done in parallel through the PMT on the one hand and the TLB on the other, so no time is wasted if the page is not in the TLB. Disadvantage: the high cost of the complex hardware required for parallel searches
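A minimal sketch in C of this lookup order, assuming a tiny fully associative TLB; the names (tlb, TLB_SIZE, walk_pmt) are hypothetical, and real hardware checks all entries in parallel rather than in a loop:

    #define TLB_SIZE 8
    struct tlb_entry { int page; int frame; int valid; };
    static struct tlb_entry tlb[TLB_SIZE];

    /* Stub for the slow path: read the full PMT in main memory. */
    static int walk_pmt(int page) { return page; }

    int lookup(int page) {
        for (int i = 0; i < TLB_SIZE; i++)       /* done in parallel in hardware */
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame;             /* TLB hit */
        return walk_pmt(page);                   /* TLB miss: fall back to the PMT */
    }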
16
TLB requirements The TLB is the most performance-critical component of a modern microprocessor. Designed in hardware. Must be quickly accessible, to shorten the clock cycle, yet large enough to avoid most PMT accesses. Contradictory requirements
17
TLB implementation Separate TLBs for instruction fetches and data loads. 2-level TLB (similar to cache). Example: AMD Opteron: 40-entry L1 instruction & data TLBs, 512-entry L2 instruction & data TLBs. Take scheduler time slices into account. What if the TLBs are of the same size? When should the TLB be updated? Does this require software support in the OS?
18
Displacement Displacement (offset) of a line – how far away a line is from the beginning of its page. Used to locate that line within its page frame; a relative factor. For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively (page size 100), so each has a displacement of zero. Address of a given program line: page number = line number div page size, displacement = line number mod page size. The division is made in hardware by the Memory Management Unit (MMU)
19
Address resolution Translate a process's address-space address into a physical address. Example: Process1 (page size = 512) executes, at line 025, the instruction LOAD R1,518. Line 518 lies at page number 518 div 512 = 1 with displacement 518 mod 512 = 6; the PMT for Process1 maps page 1 to page frame 3, so the physical address is 3 × 512 + 6 = 1542, where the operand (3792) is stored.

    PMT for Process1
    Page number   Page frame number
    0             5
    1             3

(Diagram: main memory divided into page frames starting at addresses 0, 512, 1024, …, with Proc1-Page1 in frame 3 at address 1536 and Proc1-Page0 in frame 5 at address 2560.)
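A minimal sketch in C of the translation just shown, with the PMT hard-coded for this example:

    #include <stdio.h>
    #define PAGE_SIZE 512

    int main(void) {
        int pmt[] = { 5, 3 };                 /* Process1: page 0 -> frame 5, page 1 -> frame 3 */
        int line = 518;                       /* operand address in LOAD R1,518 */
        int page = line / PAGE_SIZE;          /* 518 / 512 = 1 */
        int displacement = line % PAGE_SIZE;  /* 518 % 512 = 6 */
        int physical = pmt[page] * PAGE_SIZE + displacement;
        printf("page %d, displacement %d -> physical address %d\n",
               page, displacement, physical); /* 1, 6 -> 1542 */
        return 0;
    }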
20
Memory management Recent systems
Paged Memory Allocation Demand Paging Memory Allocation Segmented Memory Allocation Segmented/Demand Paged Allocation
21
Demand paging The first allocation scheme that removed the restriction of having the entire process stored in memory. Brings a page into memory only when it is needed, so less I/O & memory are needed and response is faster. Made virtual memory widely available; gives the appearance of an almost infinite physical memory. Takes advantage of the fact that programs are written sequentially, so not all pages are necessary at once. For example: user-written error handling modules; mutually exclusive modules (certain program options are either mutually exclusive or not always accessible); many tables are assigned a fixed amount of address space even though only a fraction of each table is actually used
22
Demand paging (cnt'd) How and when pages are passed (or "swapped") depends on predefined policies that determine when to make room for needed pages and how to do so. The policy that selects the page to be removed is crucial to system efficiency, so selection of the algorithm is critical: the one with the lowest page-fault rate is the most attractive. First-in first-out (FIFO) policy: the best page to remove is the one that has been in memory the longest. Least-recently-used (LRU) policy: chooses the least recently accessed pages to be swapped out. Most-recently-used (MRU) policy. Least-frequently-used (LFU) policy. (A FIFO simulation sketch follows.)
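A minimal sketch in C that counts page interrupts under FIFO with two page frames; the reference string here is an assumed example, not the slide's 11-request trace:

    #include <stdio.h>
    #define FRAMES 2

    int main(void) {
        int refs[] = { 0, 1, 2, 0, 3, 0, 1, 2, 3, 0, 1 };   /* assumed reference string */
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES] = { -1, -1 }, oldest = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < FRAMES; f++)
                if (frame[f] == refs[i]) hit = 1;
            if (!hit) {                          /* page interrupt */
                frame[oldest] = refs[i];         /* evict the longest-resident page */
                oldest = (oldest + 1) % FRAMES;
                faults++;
            }
        }
        printf("%d page interrupts\n", faults);
        return 0;
    }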
23
Page Map Table Extra fields with respect to the PMT in static paging: Status bit – indicates whether the page is currently in memory. Referenced bit – indicates whether the page has been referenced recently; used by LRU to determine which pages should be swapped out. Modified bit – indicates whether the page contents have been altered; used to determine whether the page must be rewritten to secondary storage when it is swapped out. Note: swapping in = copying the page from secondary to main memory
24
Instruction Processing Algorithm
Hardware components (the MMU) generate the (logical) address of the required page, find the page number, and determine whether the page is already in memory:
1. Start processing instruction
2. Generate data address
3. Compute page number
4. if (page is in memory)
   then get data and finish instruction; advance to next instruction; return to 1
   else generate page interrupt; call page fault handler
   fi
If the page is in secondary storage, the OS takes over.
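A minimal C rendering of the same control flow; pmt, PAGE_SIZE, NOT_LOADED, and page_fault_handler are hypothetical names, with the handler stubbed out:

    #define PAGE_SIZE  512
    #define NOT_LOADED (-1)                      /* the PMT's Ω: page not in memory */

    static int pmt[4] = { 5, 3, NOT_LOADED, NOT_LOADED };

    static void page_fault_handler(int page) {
        pmt[page] = 0;                           /* stub: OS loads page, updates PMT */
    }

    int frame_for(int logical_addr) {
        int page = logical_addr / PAGE_SIZE;     /* compute page number */
        if (pmt[page] == NOT_LOADED)             /* page not in memory: */
            page_fault_handler(page);            /* generate page interrupt */
        return pmt[page];                        /* get data, finish instruction */
    }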
25
Page fault handler Determines whether there are empty page frames in memory. If not, it must decide which page to swap out; this depends on the predefined policy for page removal. When a page is swapped out, 3 tables must be updated: the PMTs of the two tasks (1 in, 1 out) and the Memory Map Table. Problem with swapping: thrashing
26
Thrashing When a process is spending more time paging than executing
When does it happen? When the number of frames for a low-priority process falls below the minimum number required by the computer architecture, pages in active use are replaced by other pages in active use. Happens with an increased degree of multiprogramming. Solutions: local replacement algorithms (not a complete fix); provide the process with as many frames as it needs; use the locality model (working set). Note: this model is also used in caching
27
Thrashing -an example-
    int j, k, m, a = 2;                     /* Page 0 holds the loop body … */
    for (j = 1; j < 100; ++j) {
        k = j * j;
        m = a * j;
        printf("\n%d %d %d", j, k, m);      /* … Page 1 holds the printf call: */
    }                                       /* swapping between these two pages */
    printf("\n");
28
Dynamic allocation paging: -Working-set model-
A set of active pages large enough to avoid thrashing. Use a parameter Δ to determine a working-set window over the page reference string. Example with Δ = 10: WS(t1) = {1, 2, 5, 6, 7}, WS(t2) = {3, 4}. D = Σi WSSi is the total demand for frames, where WSSi is the working-set size of process i. If D > m, thrashing will occur, where m is the total number of available frames. (A sketch follows.)
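A minimal sketch in C that derives the working set from a reference string with Δ = 10; the reference string is an assumed example chosen to reproduce the slide's two sets:

    #include <stdio.h>
    #define DELTA 10

    static void print_ws(const int *refs, int t) {
        int seen[16] = { 0 };
        printf("WS(t=%d) = {", t);
        for (int i = t - DELTA + 1; i <= t; i++)     /* last DELTA references */
            if (!seen[refs[i]]) { seen[refs[i]] = 1; printf(" %d", refs[i]); }
        printf(" }\n");
    }

    int main(void) {
        int refs[] = { 1, 2, 5, 6, 7, 7, 7, 7, 5, 1,   /* assumed reference string */
                       3, 4, 3, 4, 3, 4, 4, 3, 4, 4 };
        print_ws(refs, 9);      /* -> {1, 2, 5, 6, 7}, as on the slide */
        print_ws(refs, 19);     /* -> {3, 4} */
        return 0;
    }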
29
A reminder: Page fault handler
Determines whether there are empty page frames in memory. If not, it must decide which page to swap out; this depends on the predefined policy for page removal. A good policy means more memory hits – improved efficiency. When a page is swapped out, 3 tables must be updated: the PMTs of the two tasks (1 in, 1 out) and the Memory Map Table. Problem with swapping: thrashing
30
FIFO policy How each requested page is swapped into the 2 available page frames using FIFO. When the program is ready to be processed, all 4 pages are on secondary storage. Throughout the program, 11 page requests are issued. When the program calls a page that isn't already in memory, a page interrupt is issued (shown by *). Nine page interrupts result.
31
FIFO policy -analogy- Dresser drawer filled with sweaters Situation:
The drawer is full and you have just bought a new sweater. You must put one sweater away to make space for the new one. Q: Which sweater to remove, the oldest or the least recently used one? Case 1: remove the oldest one. What if it turns out that it is your most treasured possession? Case 2: remove the least recently used one. What if a once-a-year occasion demands its appearance soon? Case 3: buy a new wardrobe?!
32
LRU policy Throughout the program, 11 page requests are issued, but they cause only 8 page interrupts. Efficiency is slightly better than FIFO's. The most widely used static replacement algorithm
33
LRU implementation in hardware
Problem: keeping track of the backward distance of every loaded page for every process is difficult. The page table would require an entry for the virtual time of the last reference – costly, since another page-table memory write must be introduced and virtual time must be maintained. Solution: approximate LRU behavior. Check reference bits, periodically set to 0. Introduce a shift register per page whose most significant bit acts like the reference bit, and periodically shift the contents to the right. (A sketch of this aging scheme follows.)
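A minimal sketch in C of the shift-register (aging) approximation described above; the array names and 8-bit register width are illustrative assumptions:

    #define NPAGES 4

    static unsigned char age[NPAGES];        /* one shift register per page */
    static unsigned char referenced[NPAGES]; /* reference bits set by hardware */

    void tick(void) {                        /* run periodically (clock interrupt) */
        for (int p = 0; p < NPAGES; p++) {
            age[p] = (unsigned char)((age[p] >> 1) | (referenced[p] << 7)); /* MSB <- R bit */
            referenced[p] = 0;               /* periodically reset reference bits */
        }
    }

    int victim(void) {                       /* smallest age = least recently used */
        int v = 0;
        for (int p = 1; p < NPAGES; p++)
            if (age[p] < age[v]) v = p;
        return v;
    }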
34
Pros & Cons of Demand Paging
First scheme in which a process was no longer constrained by the size of physical memory (virtual memory) Uses memory more efficiently than previous schemes because sections of a process used seldom or not at all aren’t loaded into memory unless specifically requested Increased overhead caused by tables and page interrupts
35
Evaluation of Paging
36
Memory management Recent systems
Paged Memory Allocation Demand Paging Memory Allocation Segmented Memory Allocation Segmented/Demand Paged Allocation
37
REMINDER: Thrashing -an example-
    int j, k, m, a = 2;                     /* Page 0 holds the loop body … */
    for (j = 1; j < 100; ++j) {
        k = j * j;
        m = a * j;
        printf("\n%d %d %d", j, k, m);      /* … Page 1 holds the printf call: */
    }                                       /* swapping between these two pages */
    printf("\n");
38
Segmented memory allocation
Based on the common practice by programmers of structuring their programs in modules (logical groupings of code). A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array. Main memory is not divided into page frames, because each segment has a different size; memory is allocated dynamically. An address specifies a segment name and an offset, in contrast to paging, where the user specifies only a single address
39
Segment Map Table (SMT)
Memory Manager needs to track segments in memory: Process Table (PT) lists every process (one for the whole system) Segment Map Table (SMT) lists details about each segment (one for each process) Memory Map Table (MMT) monitors allocation of main memory (one for the whole system) Each segment is numbered and a Segment Map Table (SMT) is generated for each process Contains segment numbers, their lengths, access rights, status, and (when each is loaded into memory) its location in memory
40
SMT -an example- Two-dimensional addressing scheme:
Address = (segment number, displacement)

    Segment no   Size   Status   Access   Memory address
    0            350    Y        E        4000
    1            200    Y                 7000
    2            100    N                 -----

(Y = in memory, N = not in memory, E = execute only.)
(Diagram: main memory with the Operating System below address 3000, the Main Program – segment 0, lines 0–349 – at 4000, and Subroutine A – segment 1, lines 0–199 – at 7000, among other programs; Subroutine B – segment 2, lines 0–99 – is not in memory.)
41
Segments vs. Pages
Pages: physical units, invisible to the user's program; fixed size. Disadvantage: the Memory Manager operates without any a priori knowledge of the relationships among pages.
Segments: logical units, visible to the user's program; variable size. Advantage: less swapping. Disadvantages: external fragmentation, memory compaction, secondary storage handling
42
Memory management Recent systems
Paged Memory Allocation Demand Paging Memory Allocation Segmented Memory Allocation Segmented/Demand Paged Allocation
43
Segmented/Demand Paged Memory Allocation
Evolved from the combination of segmentation and demand paging: the logical benefits of segmentation with the physical benefits of paging. Subdivides each segment into pages of equal size, smaller than most segments and more easily manipulated than whole segments. Eliminates many problems of segmentation because it uses fixed-length pages. Disadvantages: the overhead required for the extra tables, and the time required to reference both the segment table and the page table
44
Segmented/Demand Paged Memory Allocation -an example-
(Diagram: the Active Process Table points each process to its Segment Map Table; each segment entry points to a Page Map Table; each page entry gives the page frame number in main memory, where pages such as Proc0/Seg0/Pg0, Proc0/Seg1/Pg1, and Proc1's pages reside alongside the operating system. Interaction among Process Table, Segment Map Table, Page Map Table, and Main Memory.)
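A minimal sketch in C of the three-step lookup the diagram shows; all names and table sizes are assumptions for illustration:

    #define PAGE_SIZE 512

    struct pmt { int frame[8]; };             /* page number -> page frame number */
    struct smt { struct pmt *pmts[4]; };      /* segment number -> its PMT */
    struct process { struct smt *segments; }; /* process table entry -> its SMT */

    int resolve(struct process *p, int seg, int page, int offset) {
        struct pmt *t = p->segments->pmts[seg];      /* SMT lookup */
        return t->frame[page] * PAGE_SIZE + offset;  /* PMT lookup + displacement */
    }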
45
Managing insufficient memory
Memory compaction (consolidating several dispersed holes into one larger hole)
Swapping
Overlays
46
Swapping Evicting one of the processes (one with a blocked thread) resident in main memory to secondary storage. Requires cooperation between the Process manager and the Memory manager, and makes use of the relocation hardware. Issue: where to keep swapped processes on the disk? Involve the Device manager. In time-sharing systems, combined with scheduling: scheduling is performed at the end of each time quantum, and processes are swapped during the execution of other processes
47
Swapping (cnt'd) Overhead: R = [S/D + the time to reacquire primary memory], where S is the number of units of primary memory the process holds and D is the unit size of a disk block, so S/D is the number of disk blocks to transfer. Resource waste: S × T, where T is the amount of time during which the process is blocked; not always easy to estimate. Note: if T < R, then the process begins requesting memory before it is completely swapped out. Swapping strategy: calculate S × T′ (the space-time product), where T′ is the amount of time the process has held S units of memory, and replace the process with the largest product
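A worked example with assumed numbers: if a process holds S = 400 KB of primary memory and disk blocks are D = 4 KB, swapping it out takes S/D = 100 block transfers, plus the time to reacquire the memory later. If the process then stays blocked for T = 2 seconds while still holding those 400 KB, the resource waste is S × T = 800 KB·s.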
48
Swapping vs. Memory Compaction
Advantages of swapping: more efficient; guarantees creation of a hole large enough to accommodate a request (evict as many processes as necessary); affects only a small number of processes (not always); fewer memory accesses; can be used with both fixed and variable partitions. Disadvantage: requires access to secondary memory (time consuming), though this can be overlapped with the execution of other processes
49
Virtual Memory (VM) Gives the appearance that programs are completely loaded in main memory during their entire processing time. Shared programs and subroutines are loaded "on demand," reducing the storage requirements of main memory (spatial locality). The ideal operation of a VM manager: knows the exact set of addresses that need to be loaded into main memory before they are referenced, and keeps exactly those addresses in main memory that are currently referenced. IMPLEMENTATION: VM is implemented through demand paging and segmentation schemes
50
VM (cnt’d)
51
Advantages of VM Works well in a multiprogramming environment because most programs spend a lot of time waiting Process’s size is no longer restricted to the size of main memory (or the free space within main memory) Memory is used more efficiently Allows an unlimited amount of multiprogramming Eliminates external fragmentation when used with paging and eliminates internal fragmentation when used with segmentation Allows a program to be loaded multiple times occupying a different memory location each time Allows sharing of code and data Facilitates dynamic linking of program segments
52
Disadvantages of VM Increased processor hardware costs
Increased overhead for handling paging interrupts Increased software complexity to prevent thrashing
53
Cache memory Based on “the principle of locality”
(Diagram: a CPU, memory, and a device on a shared memory bus, shown with and without a cache next to the CPU.) Based on "the principle of locality". A high-speed memory that speeds up the CPU's access to information. Increases performance. No need to contend for the bus
54
Cache memory -improved efficiency-
Cache hit ratio: h = number of cache hits / total number of memory accesses. Average memory access time (one common form): t_avg = h × t_cache + (1 − h) × (t_cache + t_main)
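A worked example with assumed numbers: for h = 0.9, a 10 ns cache, and 100 ns main memory, t_avg = 0.9 × 10 + 0.1 × (10 + 100) = 20 ns, versus 100 ns with no cache at all.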
55
Address translation in Pentium
The virtual address is 48 bits long: a 16-bit segment selector (2 bits of which are for protection) plus a 32-bit offset. The segment table has two parts of 8 K entries each (Local/Global Descriptor Table)
56
The buddy system (in Linux)
Trade-off between fixed and variable partitions. Free blocks are kept in one list per power-of-two block size: here a list with 5 entries (in Linux, 10). (A sketch follows.)
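A minimal sketch in C of the buddy system's size-rounding step, assuming 5 power-of-two levels as above; order_for is a hypothetical helper:

    #include <stdio.h>
    #define LEVELS 5                 /* free-list entries: blocks of 1, 2, 4, 8, 16 units */

    /* Smallest power-of-two order whose block can hold `size` units. */
    int order_for(int size) {
        int order = 0, block = 1;
        while (block < size && order < LEVELS - 1) { block <<= 1; order++; }
        return order;
    }

    int main(void) {
        /* A 3-unit request is rounded up to a 4-unit block (order 2);
           the unused unit is internal fragmentation. */
        printf("request 3 -> order %d (block of %d units)\n",
               order_for(3), 1 << order_for(3));
        return 0;
    }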