Token-ordered LRU: an Effective Policy to Alleviate Thrashing. Presented by Xuechen Zhang, Pei Yan. ECE7995 Presentation.

Presentation transcript:

1 Token-ordered LRU: an Effective Policy to Alleviate Thrashing. Presented by Xuechen Zhang, Pei Yan. ECE7995 Presentation

2 Outline: Introduction, Challenges, Algorithm Design, Performance Evaluation, Related Work, Conclusions

3 Memory Management for Multiprogramming In a multiprogramming environment, multiple concurrent processes share the same physical memory space to support their virtual memory. These processes compete for the limited memory to establish their respective working sets. The page replacement policy is responsible for coordinating this competition by dynamically allocating memory pages among the processes. A global LRU replacement policy is usually used to manage the aggregate process memory space as a whole.

4 Thrashing Definition – In virtual memory systems, thrashing may be caused by programs that present insufficient locality of reference: if the working set of a program cannot be effectively held within physical memory, constant data swapping may cause thrashing. Problems  No program is able to establish its working set  A large number of page faults  Low CPU utilization  Execution of each program practically stops Questions: How does thrashing develop in the kernel? How do we deal with thrashing?

5 How does thrashing develop in the kernel? [Figure: two processes (Proc1, Proc2) demand-page against a shared physical memory; while pages are swapped in and out, the CPU sits idle.]

6 The performance degradation under thrashing [Figure: memory pages used over process execution time, comparing dedicated executions with concurrent executions under a 42% memory shortage. The time to the first spike is extended by 70 times; the time to the start of vortex is extended by 15 times.]

7 Insights into Thrashing A page frame of a process becomes a replacement candidate in the LRU algorithm if the page has not been used for a certain period of time. There are two conditions under which a page is not accessed by its owner process: 1) the process does not need to access the page; 2) the process is conducting page faults (sleeping), so it is unable to access the page although it might otherwise have done so. We call the LRU pages generated under the first condition true LRU pages, and those generated under the second condition false LRU pages. False LRU pages are produced by the time delay of page faults, not by the access behavior of the program.  The LRU principle is not maintained (temporal locality is not applicable!)  The amount of false LRU pages is a status indicator of serious thrashing. However, LRU page replacement implementations do not discriminate between these two types of LRU pages, and treat them equally!
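One way to make the distinction concrete is a small classifier (a hypothetical sketch, not the paper's kernel mechanism; the function name, tick counters, and the majority-vote heuristic are all assumptions):

```python
# Illustrative sketch: decide whether a page reaching the LRU tail is a
# "true" or a "false" LRU page, assuming we can observe how long the page
# went unreferenced and how much of that interval its owner spent blocked
# on page faults.

def classify_lru_page(idle_ticks, owner_faulting_ticks, threshold):
    """A page becomes a replacement candidate after `threshold` ticks
    without a reference.  If the owner spent most of that interval
    blocked on page faults, the page is a *false* LRU page: the process
    could not reference it, so its recency says nothing about locality."""
    if idle_ticks < threshold:
        return "not a candidate"
    # Hypothetical heuristic: majority of the idle interval spent
    # faulting => the idleness was caused by fault delay, not disuse.
    if owner_faulting_ticks > idle_ticks / 2:
        return "false LRU"
    return "true LRU"
```

A page idle for 100 ticks whose owner was faulting for 80 of them would be classified as false LRU and should not be evicted.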

8 Challenges How to distinguish between the two kinds of page faults? How to implement a lightweight thrashing-prevention mechanism?

9 Algorithm Design Why a token? [Figure: processes incurring true LRU page faults and false LRU page faults both trigger paging I/O; a token is introduced to single out one faulting process.]

10 Algorithm Design The basic idea:  Set a token in the system.  The token is taken by one of the processes when page faults occur.  The system eliminates the false LRU pages of the process holding the token, allowing it to quickly establish its working set.  The token process is expected to complete its execution and release its allocated memory, as well as its token.  Other processes then compete for the token and complete their runs in turn. By transferring this privilege among thrashing processes one at a time, the system can reduce the total number of false LRU pages and transform the chaotic order of page usage into an arranged order. The policy can be designed to transfer the token more intelligently among processes to address issues such as fairness and starvation.
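The steps above can be sketched as a toy simulation (illustrative only; the real mechanism lives in the Linux kernel's page-reclaim path, and the class and method names here are hypothetical). The token holder's resident pages are simply skipped when a victim is chosen, which is what shields its false LRU pages from eviction:

```python
from collections import OrderedDict

class TokenLRU:
    """Minimal sketch of a token-ordered global LRU cache.  One process
    may hold the token; its pages are skipped during victim selection,
    so it can build its working set without losing false LRU pages."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # (pid, page) -> True, least recent first
        self.token_holder = None

    def grant_token(self, pid):
        self.token_holder = pid

    def access(self, pid, page):
        """Reference a page; return the evicted (pid, page) key, if any."""
        key = (pid, page)
        if key in self.pages:
            self.pages.move_to_end(key)       # hit: refresh recency
            return None
        victim = None
        if len(self.pages) >= self.capacity:
            # Evict the least-recent page NOT owned by the token holder.
            for cand in self.pages:
                if cand[0] != self.token_holder:
                    victim = cand
                    break
            if victim is None:                # token holder owns everything
                victim = next(iter(self.pages))
            del self.pages[victim]
        self.pages[key] = True
        return victim
```

With capacity 2 and the token granted to process 1, a miss by process 1 evicts process 2's page even when process 1 owns the globally least-recent page.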

11 Considerations about the Token-ordered LRU  Which process should receive the token?  A process whose memory shortage is urgent.  How long does a process hold the token?  The holding time should be adjustable based on the seriousness of the thrashing.  What happens if thrashing is too serious?  The policy becomes a load-control mechanism: with a long token time, the programs are effectively executed one by one.  Can multiple tokens be effective against thrashing? The token and its variations have been implemented in the Linux kernel.

12 Performance evaluation Experiment environment –System environment –Benchmark programs –Performance metrics

13 Performance evaluation (Cont'd) System environment

14 Performance Evaluation (Cont'd) Benchmark programs

15 Performance evaluation (Cont'd) Status quanta – MAD (memory allocation demand): the total amount of requested memory space reflected in the page table of a program, in pages. – RSS (resident set size): the total amount of physical memory used by a program, in pages. – NPF (number of page faults): the number of page faults of a program. – NAP (number of accessed pages): the number of pages accessed by a program within a time interval of 1 s.
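As a rough illustration (a sketch assuming a Linux /proc filesystem; this is not the authors' measurement tooling, and the helper name is hypothetical), three of these quantities can be approximated directly from /proc: /proc/&lt;pid&gt;/statm reports total program size (close to MAD) and resident set size (RSS), both in pages, while /proc/&lt;pid&gt;/stat reports the minor and major fault counts (together, NPF):

```python
# Sketch of per-process memory status on Linux, read from /proc.
# NAP is omitted: counting pages touched per second needs reference-bit
# sampling, which /proc does not expose directly.

def memory_status(pid="self"):
    with open(f"/proc/{pid}/statm") as f:
        size, resident = map(int, f.read().split()[:2])  # both in pages
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # Field 2 (comm) may contain spaces; parse after the closing paren.
    fields = stat.rsplit(")", 1)[1].split()
    minflt, majflt = int(fields[7]), int(fields[9])
    return {"MAD": size, "RSS": resident, "NPF": minflt + majflt}
```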

16 Performance evaluation (Cont'd) Slowdown – The ratio between the execution time of an interacting program and its execution time in a dedicated environment without major page faults. – Measures the performance degradation of an interacting program.
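The metric is a single ratio; as a sketch (the numbers below are only illustrative, echoing the 15x extension of vortex reported on slide 6):

```python
def slowdown(concurrent_time, dedicated_time):
    """Slowdown = T_interacting / T_dedicated; 1.0 means no degradation."""
    return concurrent_time / dedicated_time

# e.g. a phase taking 750 s concurrently vs 50 s dedicated -> slowdown 15
```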

17 Performance Evaluation (Cont'd) Quickly acquiring memory allotments –Apsi, bit-r, gzip and m-m

18 Performance Evaluation (Cont'd) Gradually acquiring memory allotments –Mcf, m-sort, r-wing and vortex

19 Performance Evaluation (Cont'd) Non-regularly changing memory allotments –Gcc and LU

20 Performance evaluation (Cont'd) Interaction group (gzip & vortex3) Without token / With token At the 250th second of execution, the token was taken by vortex.

21 Performance evaluation (Cont'd) Interaction group (bit-r & gcc) Without token / With token At the 146th second of execution, the token was taken by gcc.

22 Performance evaluation (Cont'd) Interaction group (vortex3 & gcc) Without token / With token At the 397th second of execution, the token was taken by gcc.

23 Performance evaluation (Cont'd) Interaction group (vortex1 & vortex3) Without token / With token At the 433rd second of execution, the token was taken by vortex1.

24 Related work Local page replacement – The paging system selects victim pages for a process only from its own fixed-size memory space. Pros – No interference among multiple programs. Cons – Cannot adapt to dynamic changes in programs' memory demands. – Under-utilization of memory space.

25 Related work (Cont'd) Working set model – The working set (WS) of a program is the set of pages it has referenced within the previous time units (the working-set window), a subset of all its pages. Pros – Theoretically eliminates thrashing caused by chaotic memory competition. Cons – The implementation of this model is extremely expensive, since it needs to track the WS of each process.
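The working-set definition above fits in a few lines (an illustrative model over a page-reference trace; `tau` stands for the working-set window, and the function name is hypothetical):

```python
def working_set(reference_trace, t, tau):
    """WS(t, tau): the distinct pages referenced in the window (t - tau, t],
    given a trace indexed by time unit.  Tracking this per process at every
    time unit is what makes the model expensive in practice."""
    start = max(0, t - tau)
    return set(reference_trace[start:t])
```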

26 Related work (Cont'd) Load control – Adjusts the memory demands of multiple processes by changing the multiprogramming level (suspend/activate). Pros – Solves page thrashing completely. Cons – Acts in a brute-force manner. – Can severely delay processes that synchronize with suspended ones. – Difficult to determine the right time to act. – Expensive to rebuild the working set of a reactivated process.

27 Conclusions Distinguished between true LRU faults and false LRU faults. Designed the token-ordered LRU algorithm to eliminate false LRU faults. Experiments show that the algorithm effectively alleviates thrashing.
