
1 You can pick up your graded Homework #3 now or after class
Outline: Announcement; Virtual memory (overview); Page replacement algorithms.

2 Announcement Homework #5 will be due on Nov. 25, 2003
At the beginning of the class. No late submission will be accepted, as I will discuss the solutions and answer any questions regarding Homework #5. There will be a quiz on Nov. 25, 2003; it will cover memory management. Recitation class on Nov. 26, 2003? Let's decide what we want to do now. I have the graded Homework #3 and you can pick it up after class.

3 Virtual Memory Question
Given the memory mapping schemes we have covered so far, can we run a program that is larger than the physical memory? Why or why not?

4 Time and space multiplexing
The processor is a time resource: it can only be time multiplexed. Memory is a space resource. We have looked at space multiplexing of memory: we divide the memory into pages and segments, and we have several processes in memory at the same time sharing it. Now we will look at time multiplexing of memory.

5 Time and space multiplexing of hotel rooms

6 Time multiplexing memory
Swapping: move part or whole programs in and out of memory (to disk or tape); allowed time-sharing in early OSs. Overlays: move parts of a program in and out of memory (to disk or tape); allowed running programs larger than the available physical memory; widely used in early PC systems.

7 Overlays Keep in memory only those instructions and data that are needed at any given time. Needed when a process is larger than the amount of memory allocated to it. Implemented by the user; no special support is needed from the operating system, but the programming design of the overlay structure is complex.

8 Overlays – cont.

9 Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft Windows.

10 Schematic View of Swapping

11 Swapping

12 Virtual memory

13 Implementation of virtual memory
Virtual memory allows time multiplexing of memory: it lets users see a larger (virtual) address space than the physical address space, and it lets the operating system split a process between physical memory and secondary memory. Implementation requires extensive hardware assistance and a lot of OS code and time, but it is worth it.

14 Implementation of virtual memory – cont.
Separation of user logical memory from physical memory. Only part of the program needs to be in memory for execution, so the logical address space can be much larger than the physical address space. We need to allow pages to be swapped in and out. Virtual memory can be implemented via demand paging or demand segmentation.

15 Demand Paging Bring a page into memory only when it is needed.
Benefits: less I/O needed, less memory needed, faster response, more users. A page is needed ⇒ reference to it: invalid reference ⇒ abort; not in memory ⇒ bring it into memory.

16 Valid-Invalid Bit With each page table entry a valid–invalid bit is associated (1 ⇒ in memory, 0 ⇒ not in memory). Initially, the valid–invalid bit is set to 0 on all entries. Example of a page table snapshot: each entry holds a frame # and a valid–invalid bit. During address translation, if the valid–invalid bit in the page table entry is 0 ⇒ page fault.

17 Page Fault If there is ever a reference to a page, the first reference will trap to the OS ⇒ page fault. The OS looks at another table to decide: invalid reference ⇒ abort; just not in memory ⇒ continue. Get an empty frame; if none is free, do page replacement: find some page frame in memory that is not really in use and swap it out. Swap the needed page into the frame, reset the tables, and set the validation bit = 1. (A sketch of this path appears below.)
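
Below is a minimal Python sketch of the fault path just described. All names (SimpleVM, PageTableEntry, the FIFO victim choice) are illustrative assumptions, not any particular OS's data structures; the point is only the valid-bit check, the fault, and the frame reuse.

```python
# Illustrative demand-paging sketch: valid bit, page fault, frame reuse.
PAGE_SIZE = 4096

class PageTableEntry:
    def __init__(self):
        self.frame = None
        self.valid = 0          # 0 => not in memory, 1 => in memory
        self.dirty = 0

class SimpleVM:
    def __init__(self, num_pages, num_frames):
        self.page_table = [PageTableEntry() for _ in range(num_pages)]
        self.free_frames = list(range(num_frames))
        self.loaded = []        # pages currently resident, in FIFO order

    def translate(self, vaddr):
        page, offset = divmod(vaddr, PAGE_SIZE)
        pte = self.page_table[page]
        if not pte.valid:               # valid bit = 0  =>  page fault
            self.handle_fault(page)
        return pte.frame * PAGE_SIZE + offset

    def handle_fault(self, page):
        if not self.free_frames:        # no empty frame: replace a page
            victim = self.loaded.pop(0)             # FIFO victim choice (illustrative)
            vpte = self.page_table[victim]
            if vpte.dirty:
                pass                                # a real OS would write the victim to swap here
            vpte.valid = 0
            self.free_frames.append(vpte.frame)
        pte = self.page_table[page]
        pte.frame = self.free_frames.pop(0)         # "swap the page in"
        pte.valid = 1                                # reset tables, valid bit = 1
        self.loaded.append(page)

vm = SimpleVM(num_pages=16, num_frames=4)
print(vm.translate(5 * PAGE_SIZE + 123))   # first touch of page 5 faults, then maps
```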

18 Modified bit Most paging hardware also has a modified bit in the page table entry, which is set when the page is written to. This is also called the "dirty bit"; pages that have been changed are referred to as "dirty". These pages must be written out to disk when they are replaced, because the disk version is out of date.

19 Performance of Demand Paging
Page fault rate p, 0 ≤ p ≤ 1.0: if p = 0, there are no page faults; if p = 1, every reference is a fault. Effective Access Time (EAT): EAT = (1 – p) × memory access time + p × (page fault overhead + [swap page out] + swap page in + restart overhead).

20 Demand Paging Performance Example
Memory access time = 1 microsecond. 50% of the time the page that is being replaced has been modified and therefore needs to be swapped out. Swap page time = 10 msec = 10,000 μsec, so the average fault service time is 1.5 × 10,000 = 15,000 μsec. EAT = (1 – p) × 1 + p × 15,000 ≈ 1 + 14,999 p (in μsec).
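
A tiny Python sketch of the arithmetic above; the numbers come straight from the example, everything else is illustrative.

```python
# EAT for the example: 1 us memory access, 10,000 us swap, 50% dirty victims
# => on average 1.5 swaps per fault, i.e. 15,000 us of fault service time.
MEM_ACCESS_US = 1
SWAP_US = 10_000
FAULT_SERVICE_US = 1.5 * SWAP_US          # 15,000 microseconds

def eat(p):
    """Effective access time in microseconds for page-fault rate p."""
    return (1 - p) * MEM_ACCESS_US + p * FAULT_SERVICE_US

for p in (0.0, 1e-6, 1e-4, 1e-2):
    print(f"p = {p:g}: EAT = {eat(p):.3f} microseconds")
# Even p = 0.0001 (one fault per 10,000 references) makes memory ~2.5x slower.
```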

21 Locality Programs do not access their address space uniformly
they access the same locations over and over. Spatial locality: processes tend to access locations near the locations they just accessed, because of sequential program execution and because the data for a function is grouped together. Temporal locality: processes tend to access the same data over and over again, because of program loops and because data is processed repeatedly.

22 Practicality of paging
Paging only works because of locality: at any one point in time, programs don't need most of their pages. Page fault rates must be very, very low for paging to be practical – like one page fault per 100,000 or more memory references.

23 Demand paging Summary
Demand paging: bring a page into memory only when it is needed. Page fault: the first reference to a page not in memory generates a page fault; the page fault is handled by the OS and needs an empty page frame.

24 Page replacement
When there is no empty frame in memory, we need to find a page to replace. Write it out to the swap area if it has been changed since it was read in from the swap area (tracked by the dirty bit, or modified bit). Which page should be removed from memory to make room for a new page? We need a page replacement algorithm. There are two categories of replacement algorithms: static paging algorithms always replace a page from the process that is bringing in the new page; dynamic paging algorithms can replace any page.

25 Page references Processes continually reference memory
and so generate a sequence of page references. The page reference sequence tells us everything about how a process uses memory. For a given page size, we only need to consider the page number. If we have a reference to a page, then immediately following references to that page will never generate a page fault. Example addresses: 0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0103, 0104, 0101, 0609, 0102, 0105. Suppose the page size is 100 bytes: what is the page reference sequence (see the sketch below)? We use page reference sequences to evaluate paging algorithms.
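
Here is a small Python sketch of that exercise, using the addresses listed above: dividing by the 100-byte page size gives the page numbers, and collapsing immediate repeats gives the page reference sequence.

```python
# Page size 100 bytes: page number = address // 100; consecutive references to
# the same page cannot fault, so immediate repeats are collapsed.
addresses = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103,
             104, 101, 610, 103, 104, 101, 609, 102, 105]
PAGE_SIZE = 100

pages = [a // PAGE_SIZE for a in addresses]
sequence = [p for i, p in enumerate(pages) if i == 0 or p != pages[i - 1]]
print(sequence)   # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```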

26 Page replacement algorithms
The goal of a page replacement algorithm is to produce the fewest page faults. We can compare two algorithms on a range of page reference sequences, or we can compare an algorithm to the best possible algorithm. We will start by considering static page replacement algorithms.

27 Optimal replacement algorithm
The optimal algorithm is the one that produces the fewest possible page faults on all page reference sequences. Algorithm: replace the page that will not be used for the longest time in the future. Problem: it requires knowledge of the future. It is not realizable in practice, but it is used to measure the effectiveness of realizable algorithms.
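
The following Python sketch simulates the optimal policy; the function name is illustrative, and it is run here on the page reference sequence derived in the earlier exercise. It is useful only as a yardstick, since a real OS has no knowledge of the future.

```python
# Optimal (Belady) policy: at a fault, evict the resident page whose next use
# is farthest in the future (infinitely far if it is never used again).
def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")   # forward distance
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

print(optimal_faults([1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1], 2))   # 3 faults: a lower bound
```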

28 Belady’s Optimal Algorithm
Replace the page that will not be used for the longest period of time, i.e., the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). (The slide traces a page reference stream R against the frame contents.)

29 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). FWD_4(2) = 1, FWD_4(0) = 2, FWD_4(3) = 3.

30 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). FWD_4(2) = 1, FWD_4(0) = 2, FWD_4(3) = 3.

31 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x).

32 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). FWD_7(2) = 2, FWD_7(0) = 3, FWD_7(1) = 1.

33 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). FWD_10(2) = ∞, FWD_10(3) = 2, FWD_10(1) = 3.

34 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). FWD_13(0) = ∞, FWD_13(3) = ∞, FWD_13(1) = ∞.

35 Belady’s Optimal Algorithm
Replace the page with maximal forward distance: y_t = max_{x ∈ S_{t-1}(m)} FWD_t(x). 10 page faults on the example stream. Perfect knowledge of R ⇒ optimal performance, but impossible to implement.

36 Theories of program behavior
All replacement algorithms try to predict the future and act like the optimal algorithm. All replacement algorithms have a theory of how programs behave; they use it to predict the future, that is, when pages will be referenced, and then they replace the page that they think won't be referenced for the longest time.

37 Random page replacement
Algorithm: replace a page randomly. Theory: we cannot predict the future at all. Implementation: easy. Performance: poor, but it is easy to implement, and the best case, worst case, and average case are all the same.

38 Random page replacement

39 Random Replacement The replaced page, y, is chosen from the m loaded page frames with probability 1/m. (The slide traces the frame contents for an example reference stream R; the table is not reproduced here.) No knowledge of R ⇒ it does not perform well, but it is easy to implement. 13 page faults on the example stream.

40 FIFO page replacement Algorithm: replace the oldest page
Theory: pages are used for a while and then stop being used. Implementation: easy. Performance: poor, because old pages are often still being accessed; that is, the theory behind FIFO is not correct.

41 FIFO page replacement

42 First In First Out (FIFO)
Replace the page that has been in memory the longest: y_t = max_{x ∈ S_{t-1}(m)} AGE(x). (The slide traces a page reference stream R against the frame contents.)

43 First In First Out (FIFO)
Replace the page that has been in memory the longest: y_t = max_{x ∈ S_{t-1}(m)} AGE(x). AGE_4(2) = 3, AGE_4(0) = 2, AGE_4(3) = 1.

44 First In First Out (FIFO)
Replace the page that has been in memory the longest: y_t = max_{x ∈ S_{t-1}(m)} AGE(x). AGE_4(2) = 3, AGE_4(0) = 2, AGE_4(3) = 1.

45 First In First Out (FIFO)
Replace the page that has been in memory the longest: y_t = max_{x ∈ S_{t-1}(m)} AGE(x). AGE_5(1) = ? AGE_5(0) = ? AGE_5(3) = ?

46 Belady's Anomaly
For the example page reference stream R on the slide, FIFO with m = 3 frames has 9 faults, while FIFO with m = 4 frames has 10 faults: adding frames can increase the number of page faults. A sketch that reproduces the effect is shown below.
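
A Python sketch of FIFO replacement that reproduces the anomaly. The reference stream on the slide is not reproduced here, so this uses the standard textbook stream, which also gives 9 faults with 3 frames and 10 with 4.

```python
# FIFO replacement and Belady's anomaly on a standard example stream.
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.discard(queue.popleft())   # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

anomaly_stream = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(anomaly_stream, 3))   # 9 faults
print(fifo_faults(anomaly_stream, 4))   # 10 faults: more frames, more faults
```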

47 Least Frequently Used (LFU)
Replace the page with minimum use: y_t = min_{x ∈ S_{t-1}(m)} FREQ(x). FREQ_4(2) = 1, FREQ_4(0) = 1, FREQ_4(3) = 1.

48 Least Frequently Used (LFU)
Replace the page with minimum use: y_t = min_{x ∈ S_{t-1}(m)} FREQ(x). FREQ_4(2) = 1, FREQ_4(0) = 1, FREQ_4(3) = 1.

49 Least Frequently Used (LFU)
Replace the page with minimum use: y_t = min_{x ∈ S_{t-1}(m)} FREQ(x). FREQ_6(2) = 2, FREQ_6(1) = 1, FREQ_6(3) = 1.

50 Least Frequently Used (LFU)
Replace the page with minimum use: y_t = min_{x ∈ S_{t-1}(m)} FREQ(x). FREQ_7(2) = ? FREQ_7(1) = ? FREQ_7(0) = ?
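
A short Python sketch of LFU in the spirit of the FREQ notation above. The example stream is arbitrary, every reference to a page is counted (one common variant), and ties are broken arbitrarily.

```python
# Least-frequently-used (LFU): evict the resident page with the smallest count.
from collections import Counter

def lfu_faults(refs, num_frames):
    frames, freq, faults = [], Counter(), 0
    for page in refs:
        freq[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            victim = min(frames, key=lambda p: freq[p])   # minimum FREQ(x)
            frames[frames.index(victim)] = page
    return faults

print(lfu_faults([2, 0, 3, 2, 1, 2, 0, 1, 3, 0, 3, 1], 3))
```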

51 LRU page replacement Least-recently used (LRU)
Algorithm: remove the page that hasn't been referenced for the longest time. Theory: the future will be like the past; page accesses tend to be clustered in time. Implementation: hard; it requires hardware assistance (and even then it is not easy). Performance: very good, within 30%-40% of optimal.
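
A Python sketch of exact LRU, treating an ordered dictionary as the recency stack; the function name is illustrative, and the example stream is the one derived in the earlier exercise.

```python
# True LRU: a hit moves the page to the most-recently-used end; on a fault with
# no free frame, the page at the least-recently-used end is evicted.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames = OrderedDict()          # keys ordered from LRU (front) to MRU (back)
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # referenced: now most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)          # evict the least recently used page
        frames[page] = True
    return faults

print(lru_faults([1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1], 2))   # 3 faults on this stream
```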

52 LRU model of the future

53 LRU page replacement

54 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x). BKWD_4(2) = 3, BKWD_4(0) = 2, BKWD_4(3) = 1.

55 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x). BKWD_4(2) = 3, BKWD_4(0) = 2, BKWD_4(3) = 1.

56 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x). BKWD_5(1) = 1, BKWD_5(0) = 3, BKWD_5(3) = 2.

57 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x). BKWD_6(1) = 2, BKWD_6(2) = 1, BKWD_6(3) = 3.

58 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x).

59 Least Recently Used (LRU)
Replace the page with maximal backward distance: y_t = max_{x ∈ S_{t-1}(m)} BKWD_t(x). Backward distance is a good predictor of forward distance -- locality.

60 Approximating LRU LRU is difficult to implement
usually it is approximated in software with some hardware assistance. We need a referenced bit in the page table entry: it is turned on when the page is accessed and can be turned off by the OS.

61 First LRU approximation
When you get a page fault, replace any page whose referenced bit is off, then turn off all the referenced bits. This divides pages into two classes: pages referenced since the last page fault, and pages not referenced since the last page fault. The least recently used page is in the second class, but you don't know which one it is. A crude approximation of LRU.

62 Second LRU approximation
Algorithm: keep a counter for each page. Have a daemon wake up every 500 ms and add one to the counter of each page that has not been referenced, zero the counter of pages that have been referenced, and turn off all referenced bits. When you get a page fault, replace the page whose counter is largest. This divides pages into 256 classes.
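
A Python sketch of this counter-based approximation; the class names, the 255 cap, and the tiny driver at the end are illustrative assumptions.

```python
# Counter-based LRU approximation: a periodic daemon ages unreferenced pages.
class Page:
    def __init__(self):
        self.referenced = False     # set by "hardware" when the page is accessed
        self.counter = 0            # periods since last reference (capped at 255)

def daemon_tick(pages):
    """Run every ~500 ms in the scheme described above."""
    for p in pages:
        if p.referenced:
            p.counter = 0                          # referenced: zero the counter
        else:
            p.counter = min(p.counter + 1, 255)    # 256 classes: 0..255
        p.referenced = False                       # turn off all referenced bits

def choose_victim(pages):
    """On a page fault, replace the page whose counter is largest."""
    return max(range(len(pages)), key=lambda i: pages[i].counter)

pages = [Page() for _ in range(4)]
pages[0].referenced = True                         # simulate an access to page 0
daemon_tick(pages)
print(choose_victim(pages))   # prints 1 (pages 1-3 all have counter 1; max picks the first)
```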

63 LRU and its approximations

64 A clock algorithm

65 Clock algorithms Clock algorithms try to approximate LRU but without extensive hardware and software support. The page frames are (conceptually) arranged in a big circle (the clock face). A pointer moves from page frame to page frame, trying to find a page to replace and manipulating the referenced bits. We will look at three variations; FIFO is actually the simplest clock algorithm.

66 Basic clock algorithm When you need to replace a page
look at the page frame at the clock hand: if the referenced bit = 0, then replace the page; else set the referenced bit to 0 and move the clock hand to the next page. Pages get one clock-hand revolution to get referenced.
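
A Python sketch of the basic clock hand; the function name and the example bits are illustrative.

```python
# Basic clock algorithm: advance the hand, clearing referenced bits, until a
# frame with referenced bit 0 is found; that frame holds the victim page.
def clock_replace(ref_bits, hand):
    """ref_bits[i] is the referenced bit of frame i; returns (victim_frame, new_hand)."""
    n = len(ref_bits)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n        # replace here; hand moves past the frame
        ref_bits[hand] = 0                     # give the page one revolution of grace
        hand = (hand + 1) % n

bits = [1, 1, 0, 1]
victim, hand = clock_replace(bits, hand=0)
print(victim, bits)    # victim is frame 2; the bits of frames 0 and 1 were cleared
```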

67 Second chance algorithm
When you need to replace a page, look at the page frame at the clock hand: if (referencedBit == 0 && modifiedBit == 0), then replace the page; else if (referencedBit == 0 && modifiedBit == 1), then set modifiedBit to 0, start writing the page to disk, and move on; else set referencedBit to 0 and move on. Dirty pages get a second chance because they are more expensive to replace.
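
A Python sketch of this second-chance variant; the disk write is only indicated by a comment, and the data layout is an illustrative assumption.

```python
# Second-chance clock: clean+unreferenced -> replace; dirty+unreferenced ->
# schedule writeback and move on; referenced -> clear the bit and move on.
def second_chance(frames, hand):
    """frames is a list of dicts with 'ref' and 'dirty' bits; returns (victim, new_hand)."""
    n = len(frames)
    while True:
        f = frames[hand]
        if f["ref"] == 0 and f["dirty"] == 0:
            return hand, (hand + 1) % n        # replace this page
        if f["ref"] == 0 and f["dirty"] == 1:
            f["dirty"] = 0                     # start writing the page to disk (not shown)
        else:
            f["ref"] = 0                       # referenced: clear the bit and move on
        hand = (hand + 1) % n

frames = [{"ref": 1, "dirty": 0}, {"ref": 0, "dirty": 1}, {"ref": 0, "dirty": 0}]
print(second_chance(frames, hand=0))           # (2, 0): frame 2 is clean and unreferenced
```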

68 Clock algorithm flow chart

69 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (The slide traces LRU with m and m + 1 frames.)

70 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (The LRU trace continues on the slide.)

71 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (The LRU trace continues on the slide.)

72 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (The LRU trace continues on the slide.)

73 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (The LRU trace continues on the slide.)

74 Stack Algorithms Some algorithms are well-behaved
Inclusion property: the pages loaded at time t with m frames are also loaded at time t with m + 1 frames. (Here the slide traces FIFO; FIFO is not a stack algorithm and does not in general satisfy the inclusion property, which is why it can exhibit Belady's anomaly.)

75 Dynamic Paging Algorithms
The amount of physical memory -- the number of page frames -- varies as the process executes. How much memory should be allocated? The fault rate must be "tolerable" and will change according to the phase of the process. We need to define a placement & replacement policy. Contemporary models are based on the working set.

76 Working Set Principle Process p_i should only be loaded and active if it can be allocated enough page frames to hold its entire working set. The size of the working set is estimated using a window size ω. Unfortunately, a "good" value of ω depends on the size of the locality. Empirically this works with a fixed ω.

77 Working set A page in memory is said to be resident
The working set of a process is the set of pages it needs to have resident in order to have an acceptably low paging rate. Operationally: each page reference is a unit of (virtual) time; the page reference sequence is r_1, r_2, …, r_T, where T is the current time; W(T, ω) = { p | p = r_t for some t with T - ω < t ≤ T } is the working set, where ω is the window size.
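
A Python sketch of the definition, run on the reference stream shown on the next slide; the parameter `window` plays the role of ω, and the 1-indexed times match r_1 … r_T.

```python
# Working set W(T, window): distinct pages referenced at times t with
# T - window < t <= T (times are 1-indexed to match r_1 ... r_T).
def working_set(refs, T, window):
    start = max(0, T - window)          # list index of r_{T-window+1}
    return set(refs[start:T])

R = [0, 3, 1, 2, 1, 1, 0, 3, 0, 1, 2]
print(working_set(R, T=6, window=4))    # pages referenced at times 3..6 -> {1, 2}
print(working_set(R, T=9, window=4))    # pages referenced at times 6..9 -> {0, 1, 3}
```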

78 The Working Set Window
The reference stream r_i: R = … 0 3 1 2 1 1 0 3 0 1 2 …
The "window" slides along the stream: at virtual time i-1 the working set = {0, 1}; at virtual time i the working set = {0, 1, 3}.

79 Program phases Processes tend to have stable working sets for a period of time, called a phase. Then they change phases to a different working set. Between phases the working set is unstable and we get lots of page faults.

80 Working set phases

81 Working set algorithm Algorithm
Keep track of the working set of each running process. Only run a process if its entire working set fits in memory – this is called the working set principle.

82 Working set algorithm example

83 Working set algorithm example – cont.

84 Working set algorithm example – cont.

85 Implementing Working Set Algorithm
The exact working set algorithm is too hard to implement, so it is only approximated. WSClock algorithm: a clock algorithm that approximates the working set algorithm. It needs to keep the last reference time of each page frame; when the reference bit is found set, the last reference time is set to the current time. When a page fault occurs, inspect each frame as in the clock algorithm; for a frame whose reference bit is not set, determine whether it is still in the working set by comparing its last reference time against the window.

86 Evaluating paging algorithms
Mathematical modeling: powerful where it works, but most real algorithms cannot be analyzed. Measurement: implement the algorithm on a real system and measure it; extremely expensive. Simulation: test on page reference traces; reasonably efficient and effective.

87 Performance of paging algorithms

88 Thrashing VM allows more processes in memory, so one is more likely to be ready to run. If CPU usage is low, it seems logical to bring more processes into memory. But low CPU use may be due to too many page faults because there are too many processes competing for memory. Bringing in more processes then makes it worse and leads to thrashing.

89 Thrashing Diagram There are too many processes in memory and no process has enough memory to run. As a result, the page fault rate is very high and the system spends all of its time handling page fault interrupts.

90 Load control Load control: deciding how many processes should be competing for page frames. Too many leads to thrashing; too few means that memory is underused. Load control determines which processes are running at a point in time; the others have no page frames and cannot run. CPU load is a bad load control measure; page fault rate is a good load control measure.

91 Load control and page replacement

92 Two levels of scheduling

93 Load control algorithms
A load control algorithm measures memory load and swaps processes in and out depending on the current load. Load control measures: rotational speed of the clock hand, average time spent in the standby list, page fault rate.

94 Page fault frequency load control
L = mean time between page faults; S = mean time to service a page fault. Try to keep L = S: if L < S, then swap a process out; if L > S, then swap a process in. If L = S, then the paging system can just keep up with the page faults.
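
A Python sketch of this decision rule; the tolerance band is an illustrative assumption (the slide's rule compares L and S exactly).

```python
# Page-fault-frequency load control: compare the mean time between faults L
# with the mean fault service time S and adjust the multiprogramming level.
def load_control_decision(L, S, tolerance=0.1):
    """Return 'swap out', 'swap in', or 'hold' based on L vs S."""
    if L < S * (1 - tolerance):
        return "swap out"   # faults arrive faster than they can be serviced
    if L > S * (1 + tolerance):
        return "swap in"    # memory is underused; admit another process
    return "hold"           # L ~= S: the paging system just keeps up

print(load_control_decision(L=8.0, S=10.0))   # swap out
print(load_control_decision(L=15.0, S=10.0))  # swap in
```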

95 Windows NT Paging System
(Diagram: a reference to address k in page i of the user-space virtual address space is looked up as (page i, addr k) and translated to (page frame j, addr k) in primary memory, with the paging disk serving as secondary memory; supervisor space and user space are shown separately.)

96 Windows Address Translation
(Diagram: the virtual address is divided into a page directory index, a page table index, and a byte index; the page directory entry selects a page table, the page table entry selects the target page, and the byte index locates the target byte within that page.)

97 Linux Virtual Address Translation

98 NT Memory-mapped Files
To map a file: open the (ordinary) file on secondary memory, create a section object that maps the file, and identify the point in the address space at which to place the file. (The slide diagrams the section object mapping the file into the memory-mapped file region of the address space.)

99 NT Memory-mapped Files – cont.

100 Summary Virtual memory: time multiplexing of primary memory.
Can run a program larger than the primary memory. Separates logical and physical memory address spaces through the memory hierarchy.

101 Summary – cont. Demand paging
Bring a page into primary memory only when it is needed. When the referenced page is valid but not in memory: get an empty frame; if there is no empty frame, use a page replacement algorithm to find a frame and swap its page out if needed; swap the referenced page in; update the page table.

