OPERATING SYSTEM CONCEPTS AND PRACTISE

1 OPERATING SYSTEM CONCEPTS AND PRACTISE

2 VIRTUAL MEMORY Introduction
Virtual memory is a technique that allows the execution of processes that are not completely in memory.
Only part of the program needs to be in memory for execution.
The logical address space can therefore be much larger than the physical address space.
Allows address spaces to be shared by several processes.
Allows for more efficient process creation.

3 Virtual address spaces that include holes are known as sparse address spaces
Allows files and memory to be shared by two or more processes through page sharing.
Benefits:
System libraries can be shared by several processes.
Virtual memory enables processes to share memory.
Pages can be shared during process creation with the fork() system call.

4 Shared Library Using Virtual Memory

5 Implementation of Virtual Memory
Demand Paging
Bring a page into memory only when it is needed:
Less I/O needed
Less memory needed
Faster response
More users
Page is needed ⇒ reference to it:
invalid reference ⇒ abort
not-in-memory ⇒ bring to memory

6 Transfer of a Paged Memory to Contiguous Disk Space

7 Initially, the valid–invalid bit is set to 0 on all entries
The pager brings only those necessary pages into memory.
With each page table entry a valid–invalid bit is associated (1 ⇒ in-memory, 0 ⇒ not-in-memory).
Initially, the valid–invalid bit is set to 0 on all entries.

8 Page Table When Some Pages Are Not in Main Memory

9 Access to a page marked invalid causes a page-fault trap.
The procedure for handling this page fault is:
1. Check an internal table to decide whether the reference was valid or invalid.
2. If the reference was invalid, terminate the process.
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table to show that the page is now in memory.
6. Restart the instruction that was interrupted by the trap.
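
A toy simulation of these steps might look as follows. All structures are hypothetical stand-ins, not from the slides: a dict page table of (frame, valid-bit) pairs, a list of free frames, and a dict simulating the backing store.

```python
# Toy demand-paging lookup. page_table maps page -> (frame, valid_bit);
# backing_store simulates the disk; all names are illustrative.
def access(page_table, backing_store, free_frames, page):
    frame, valid = page_table.get(page, (None, 0))
    if valid:
        return frame                      # in memory: ordinary access
    # --- page fault: the steps listed above ---
    if page not in backing_store:         # invalid reference -> terminate
        raise MemoryError(f"invalid reference to page {page}")
    frame = free_frames.pop()             # find a free frame
    _ = backing_store[page]               # "schedule" the disk read
    page_table[page] = (frame, 1)         # update table, set valid bit
    return frame                          # restart the instruction

page_table, free_frames = {}, [2, 1, 0]
backing_store = {0: "...", 1: "...", 7: "..."}
print(access(page_table, backing_store, free_frames, 7))  # fault -> frame 0
print(access(page_table, backing_store, free_frames, 7))  # hit: no fault
```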

10 Steps in Handling a Page Fault

11 The hardware to support demand paging
Page table: can mark an entry invalid through a valid–invalid bit or a special value of the protection bits.
Secondary memory: holds those pages that are not present in main memory.
A crucial requirement for demand paging is the ability to restart any instruction after a page fault.

12 Performance of Demand Paging
Let p be the probability of a page fault (the page-fault rate), 0 ≤ p ≤ 1.0:
if p = 0, there are no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT):
EAT = (1 − p) × memory access time + p × page-fault time
where page-fault time = page-fault service overhead + time to swap the page in + restart overhead.
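
As a quick worked example (the 200 ns memory access and 8 ms fault-service time below are illustrative numbers, not from the slides), even a tiny fault rate dominates the average:

```python
# Effective access time for demand paging.
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """p = page-fault probability; times in nanoseconds (assumed values)."""
    return (1 - p) * mem_ns + p * fault_ns

print(effective_access_time(0.0))    # 200.0 ns: no faults
print(effective_access_time(0.001))  # 8199.8 ns: one fault per 1000 accesses
                                     # already slows memory down ~40x
```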

13 Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory.
If either process modifies a shared page, only then is the page copied.
These shared pages are marked as copy-on-write pages.
COW allows more efficient process creation, as only modified pages are copied.
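
A small POSIX-only sketch of COW in action, using os.fork(); the byte string is just an illustrative stand-in for a shared page.

```python
import os

# After fork(), parent and child share physical pages copy-on-write.
data = bytearray(b"shared page contents")

pid = os.fork()
if pid == 0:                      # child process
    data[0:6] = b"COPIED"         # first write forces a private copy of the page
    print("child sees: ", bytes(data))
    os._exit(0)
else:                             # parent process
    os.waitpid(pid, 0)
    print("parent sees:", bytes(data))  # still b"shared page contents"
```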

14 Page Replacement
Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
Use a modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk.
Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory.

15 Need For Page Replacement

16 Basic Page Replacement
If no frame is free, we find one that is not currently being used and free it. The page-fault service routine is modified as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
   a. If there is a free frame, use it.
   b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
   c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Restart the user process.

17 Page Replacement

18 Two major problems must be solved to implement demand paging:
the frame-allocation algorithm
the page-replacement algorithm
If we have multiple processes in memory, we must decide how many frames to allocate to each process, and select the frames that are to be replaced.
We evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

19 First-In-First-Out (FIFO) Algorithm
Associates with each page the time when that page was brought into memory; when a page must be replaced, the oldest page is chosen.
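
A minimal FIFO simulation, counting faults on a reference string as described in slide 18 (the string below is the classic textbook example):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement on a reference string."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict oldest
                frames.discard(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15 faults with 3 frames
```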

20 Optimal Page Replacement
Replace the page that will not be used for the longest period of time.
This guarantees the lowest possible page-fault rate for a fixed number of frames.
The optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of the reference string; still, optimal replacement is much better than FIFO and serves as a benchmark.
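
A sketch of the optimal (OPT) policy; it can be run in a simulator precisely because a simulator can look ahead in the reference string:

```python
def optimal_faults(refs, nframes):
    """Belady's OPT: evict the page whose next use lies farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            rest = refs[i + 1:]
            def next_use(p):                  # index of next use, inf if never
                return rest.index(p) if p in rest else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))   # 9 faults: the minimum possible here
```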

21 LRU Page Replacement
LRU replacement associates with each page the time of that page's last use.
An LRU page-replacement algorithm may require substantial hardware assistance.
Two implementations are feasible: counters and a stack.

22 LRU Page Replacement

23 Counter implementation
Every page-table entry has a time-of-use counter; every time the page is referenced through this entry, the clock is copied into the counter.
When a page needs to be replaced, look at the counters to find the page with the smallest (oldest) value.
Stack implementation – keep a stack of page numbers in doubly linked form:
When a page is referenced, move it to the top; this requires up to 6 pointers to be changed.
No search is needed for replacement – the least recently used page is always at the bottom.
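
A compact LRU simulation using Python's OrderedDict as the stack (most recently used entry at the end):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU via an ordered map used as the stack: most recent entry at the end."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)           # referenced: move to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)     # bottom of stack = LRU victim
            stack[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 faults on the same string
```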

24 LRU Approximation Algorithms
Reference bit With each page associate a bit, initially = 0 When page is referenced bit set to 1 Replace the one which is 0 (if one exists). We do not know the order, however. Second chance Algorithm Need reference bit If the value is 0, we proceed to replace this page Reference bit is set to 1, we give the page a second chance and move on to select the next FIFO page. When a page gets a second chance, its reference bit is cleared arrival time is reset to the current time.
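
A sketch of the second-chance (clock) algorithm: frames sit in a circular buffer and the "hand" clears reference bits until it finds a victim.

```python
def clock_faults(refs, nframes):
    """Second-chance (clock): FIFO order, but a set reference bit buys a pass."""
    frames = [None] * nframes          # circular buffer of (page, ref_bit)
    where = {}                         # page -> slot index
    hand, faults = 0, 0
    for page in refs:
        if page in where:
            frames[where[page]] = (page, 1)   # hit: set the reference bit
            continue
        faults += 1
        while frames[hand] is not None:       # look for a victim
            victim, ref = frames[hand]
            if ref == 0:
                del where[victim]             # bit clear: evict this page
                break
            frames[hand] = (victim, 0)        # second chance: clear the bit
            hand = (hand + 1) % nframes
        frames[hand] = (page, 1)
        where[page] = hand
        hand = (hand + 1) % nframes
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(clock_faults(refs, 3))   # falls between OPT (9) and FIFO (15) here
```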

25 Second-Chance (clock) Page-Replacement Algorithm

26 Enhanced Second-Chance Algorithm
Consider the reference bit and modify bit as an ordered pair:
(0, 0) neither recently used nor modified – best page to replace
(0, 1) not recently used but modified – not quite as good, because the page must be written out before replacement
(1, 0) recently used but clean – probably will be used again soon
(1, 1) recently used and modified – probably will be used again soon, and the page must be written out before it can be replaced

27 Counting-Based Page Replacement
Keep a counter of the number of references that have been made to each page.
The least frequently used (LFU) page-replacement algorithm requires that the page with the smallest count be replaced.
The most frequently used (MFU) algorithm replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
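
A sketch of one LFU variant (reference counts persist across evictions; ties are broken arbitrarily — real implementations differ on both points):

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """LFU: evict the resident page with the smallest reference count."""
    frames, counts, faults = set(), Counter(), 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:             # evict least frequently used
            frames.discard(min(frames, key=lambda p: counts[p]))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lfu_faults(refs, 3))
```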

28 Allocation of Frames Allocate at least a minimum number of frames.
We must have enough frames to hold all the different pages that any single instruction can reference.
The minimum number of frames per process is defined by the architecture.
The maximum number is defined by the amount of available physical memory.

29

30 Global vs. Local Allocation
Global replacement allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process. One problem with a global replacement algorithm is that a process cannot control its own page-fault rate.
Local replacement – each process selects from only its own set of allocated frames, so the set of pages in memory for a process is affected by the paging behavior of only that process.

31 Thrashing If a process does not have "enough" pages, the page-fault rate is very high.
The process does not have enough frames to support the pages it is actively using.
This may lead the operating system to suspend the process and page out its remaining pages, introducing a swap-in, swap-out level of scheduling.
This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing.

32 Cause of Thrashing
The operating system monitors CPU utilization; if utilization is low, we increase the degree of multiprogramming by introducing a new process to the system.
If a process enters a new phase in its execution and needs more frames, it starts faulting and taking frames away from other processes.
As processes wait for the paging device, CPU utilization decreases – so the OS introduces still more processes, and faulting increases further.
We can limit the effects of thrashing by using a local replacement algorithm.

33 With local replacement
If one process starts thrashing, it cannot steal frames from another process; thrashing processes will still be in the queue for the paging device, however, slowing everyone down.
To provide a process with as many frames as it needs, the working-set strategy is used, which starts by defining the locality model of process execution.
The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together.
A program is generally composed of several different localities, which may overlap.

34 Working-Set Model The working-set model is based on the assumption of locality.
A parameter Δ defines the working-set window. If a page is in active use, it will be in the working set. If it is no longer being used, it will drop from the working set Δ time units after its last reference. Thus, the working set is an approximation of the program's locality.
For example, if Δ = 10 memory references, the working set at time t₁ might be {1, 2, 5, 6, 7}; by time t₂, the working set has changed to {3, 4}.
WSSᵢ (the working-set size of process Pᵢ) = the total number of pages referenced in the most recent Δ references (this varies over time).

35 Compute the working-set size WSSᵢ for each process in the system:
if Δ is too small, it will not encompass the entire locality
if Δ is too large, it will encompass several localities
if Δ = ∞, it will encompass the entire program
D = Σ WSSᵢ is the total demand for frames: each process is actively using the pages in its working set.
If the total demand is greater than the total number of available frames (D > m), thrashing will occur, and the OS must select a process to suspend.
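
The definition transcribes directly into code: the working set at time t is the set of distinct pages in the last Δ references. The reference string below is constructed to match the slide's example, not taken from it.

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of delta references ending at t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

# Constructed so an early window gives {1,2,5,6,7} and a late one gives {3,4}:
refs = [1,2,5,6,7,7,7,7,5,1,6,2,3,4,1,2,3,4,4,4,3,4,3,4,4,4]
print(working_set(refs, 9, 10))    # {1, 2, 5, 6, 7} at t1
print(working_set(refs, 25, 10))   # {3, 4} at t2: the locality has shifted
```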

36 Working Set model

37 Page-Fault Frequency Scheme
A more direct way to control the page-fault rate: establish upper and lower bounds on the desired page-fault rate.
When the page-fault rate is too high, we know that the process needs more frames; if the page-fault rate is too low, the process may have too many frames.
If the actual page-fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls below the lower limit, we remove a frame from the process.
If the page-fault rate increases and no free frames are available, we may have to suspend a process.
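
The control rule reduces to a few lines; the 0.02 and 0.10 bounds below are hypothetical thresholds, not values from the slides:

```python
def adjust_frames(nframes, fault_rate, lower=0.02, upper=0.10):
    """Page-fault-frequency control: grow the allocation above the upper
    bound, shrink it below the lower bound (threshold values are assumed)."""
    if fault_rate > upper:
        return nframes + 1          # process needs more frames
    if fault_rate < lower:
        return max(1, nframes - 1)  # process has frames to spare
    return nframes                  # within the acceptable band

print(adjust_frames(10, 0.15))  # 11
print(adjust_frames(10, 0.01))  # 9
```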

38 Memory-Mapped Files Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory Memory mapping only through a specific system call A file is initially read using demand paging. A page- sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses. Simplifies file access by treating file I/O through memory rather than read() write() system calls Also allows several processes to map the same file allowing the pages in memory to be shared

39 Memory Mapped Files

40 Shared Memory in the Win32 API
Creating a region of shared memory using memory-mapped files in the Win32 API involves first creating a file mapping for the file, then establishing a view of the mapped file in the process's virtual address space.
A second process can then open and create a view of the same mapped file in its own virtual address space.

41 Allocating Kernel Memory
When a process running in user mode requests additional memory, pages are allocated from the list of free page frames maintained by the kernel.
Kernel memory is often allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes, because the kernel requests memory for data structures of varying sizes, some of which are less than a page in size.
Also, pages allocated to user-mode processes do not necessarily have to be in contiguous physical memory, whereas some kernel and device uses do require physically contiguous pages.

42 Buddy System
The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages.
Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units sized as a power of 2; the requested size is rounded up to the next highest power of 2.
The drawback of the buddy system is that rounding up to the next highest power of 2 is very likely to cause fragmentation within allocated segments.
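
The rounding rule is simple to state in code; the request sizes below are arbitrary examples chosen to show the internal fragmentation:

```python
def buddy_block_size(request):
    """Round a request up to the next power of 2, as a buddy allocator does."""
    size = 1
    while size < request:
        size *= 2
    return size

for req in (3000, 11000, 21000):          # arbitrary request sizes in bytes
    size = buddy_block_size(req)
    print(f"request {req:>6} B -> block {size:>6} B "
          f"(internal fragmentation {size - req} B)")
```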

43 Slab Allocation
The slab-allocation algorithm uses caches to store kernel objects.
A slab is made up of one or more physically contiguous pages; a cache consists of one or more slabs.
There is a single cache for each unique kernel data structure, and each cache is populated with objects that are instantiations of the kernel data structure the cache represents.

44 The slab allocator provides two main benefits:
No memory is wasted due to fragmentation, because each cache holds objects of exactly the size the kernel requests.
Memory requests can be satisfied quickly, because objects are created in advance in the slabs and can be handed out immediately.
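
A minimal sketch of a slab-style object cache (class and field names are hypothetical, not the kernel's API): each cache preallocates fixed-size objects and serves them from a free list, so allocation is constant-time and fragmentation-free.

```python
class SlabCache:
    """One cache per kernel data structure; slabs hold preallocated objects."""
    def __init__(self, make_object, objects_per_slab=8):
        self._make = make_object
        self._per_slab = objects_per_slab
        self._free = []                  # free objects across all slabs

    def alloc(self):
        if not self._free:               # all slabs full: carve a new slab
            self._free = [self._make() for _ in range(self._per_slab)]
        return self._free.pop()          # constant time, exact-size object

    def free(self, obj):
        self._free.append(obj)           # return the object to its cache

# Hypothetical example: a cache of PCB-like records.
pcb_cache = SlabCache(lambda: {"pid": None, "state": "new"})
p = pcb_cache.alloc()
pcb_cache.free(p)
```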

