Computer Architecture

Computer Architecture Part III-B: Cache Memory

Access Time
If every memory reference to the cache also required the transfer of one word between main memory (MM) and the cache, no increase in speed would be achieved. In fact, speed would drop because, apart from the MM access, there is an additional access to the cache.
Suppose a reference is repeated n times, and after the first reference the location is always found in the cache. The average access time is then
    ta = (n*tc + tm) / n
where
    tc = cache access time
    tm = main memory access time
    n  = number of accesses/references
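A quick numeric check of the formula above (Python sketch; the values tc = 10 ns, tm = 100 ns, n = 20 are assumed for illustration, not from the slides):

    # Average access time when only the first of n references misses the cache.
    tc = 10     # cache access time in ns (assumed)
    tm = 100    # main memory access time in ns (assumed)
    n = 20      # number of references to the same word (assumed)

    ta = (n * tc + tm) / n
    print(ta)   # 15.0 ns -- approaches tc as n grows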

Cache Hit Ratio
The probability that a word will be found in the cache. It depends upon the program and on the size and organization of the cache.
    h = (number of times the required word is found in the cache) / (total number of references)
h is called the hit ratio.

Access Time
Expressed in terms of the hit ratio, the average access time is
    ta = tc + (1 - h) * tm
where
    ta = average access time
    tc = cache access time
    (1 - h) = miss ratio
    tm = memory access time
(Every reference pays the cache access time; only the misses additionally pay the main memory access time. For the repeated-reference case above, h = (n - 1)/n and the two formulas agree.)
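A small sketch of this calculation (Python; h = 0.9, tc = 10 ns, tm = 100 ns are illustrative values, not from the slides):

    tc = 10     # cache access time in ns (assumed)
    tm = 100    # main memory access time in ns (assumed)
    h = 0.9     # hit ratio (assumed)

    ta = tc + (1 - h) * tm   # every reference pays tc; misses also pay tm
    print(ta)                # 20.0 ns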

Fetch Mechanisms
Demand fetch: fetch a block from memory when it is needed and is not in the cache.
Prefetch: fetch block(s) from memory before they are requested.
Selective fetch: blocks are not always fetched; fetching depends on some defined criterion, and such blocks are kept in MM rather than in the cache.

Write Mechanisms
When words are read from the cache, their contents are not modified. In general, however, cache data can be modified, so it is possible for the data in the cache to differ from the data in MM.
Two mechanisms keep the cache and MM in sync:
Write-through mechanism
Write-back mechanism

Synchronization Mechanisms
Write-through: every write operation to the cache is simultaneously repeated for MM.
Write-back: the write to MM is done only at block-replacement time (i.e., a block displaced by an incoming block may be written back to MM, regardless of whether the block was altered or not).
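A minimal sketch of the two policies (Python; single-word "lines" keyed by address, with a dictionary standing in for MM; all names are illustrative):

    # MM is modeled as a plain dictionary from address to value.
    memory = {0: 5, 1: 7}

    class WriteThroughCache:
        def __init__(self):
            self.lines = {}
        def write(self, addr, value):
            self.lines[addr] = value
            memory[addr] = value                 # write-through: MM updated on every write

    class WriteBackCache:
        def __init__(self):
            self.lines = {}
        def write(self, addr, value):
            self.lines[addr] = value             # MM left stale for now
        def evict(self, addr):
            memory[addr] = self.lines.pop(addr)  # written back only when the block is displaced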

Replacement Algorithms
When the word being requested by the CPU is not in the cache, it needs to be transferred from MM (or, similarly, from secondary memory to MM).
A page fault occurs when a page or a block is not in the cache (or not in MM, in the case of secondary memory).
Replacement algorithms determine which page/block to remove or overwrite.

Characteristics
Replacement algorithms are either usage based or non-usage based.
Usage based: the choice of page/block to replace depends on how many times each page/block has been referenced.
Non-usage based: some other criterion is used for replacement.

Assumptions
For a given page size, we only need to consider the page/block number.
If we have a reference (hit) to a page p, then any immediately succeeding references to p do not cause a page fault.
The size of memory/cache is represented as the number of pages it is capable of holding (page frames).

Example
Consider the following sequence of address references:
0110 0432 0101 0612 0102 0103 0104 0101 0611 0102 0103 0302
which, at 100 bytes per page, can be reduced to the following access string (consecutive references to the same page are collapsed, since they cannot cause additional faults):
1 4 1 6 1 6 1 3
This sequence of page requests is called a reference string.
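A small sketch of this reduction (Python; the variable names are illustrative):

    # Reduce an address trace to a page reference string at 100 bytes per page.
    addresses = [110, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 302]
    page_size = 100

    pages = [addr // page_size for addr in addresses]

    # Drop immediately repeated pages: a re-reference to the page just used
    # cannot cause another fault.
    reference_string = [p for i, p in enumerate(pages) if i == 0 or p != pages[i - 1]]
    print(reference_string)   # [1, 4, 1, 6, 1, 6, 1, 3]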

Replacement Policies
Random replacement
First-in first-out (FIFO) replacement
Optimal algorithm
Least recently used (LRU)
Least frequently used (LFU)
Most frequently used (MFU)

Random Replacement
A page is chosen randomly at page-fault time; there is no relationship between the pages or their use.
The choice is made by a random number generator.
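A minimal sketch of a random-replacement fault counter (Python; the function name and seed are illustrative):

    import random

    def random_faults(refs, num_frames, seed=0):
        """Count page faults when the victim frame is picked at random."""
        random.seed(seed)                                  # fixed seed so the sketch is repeatable
        frames, faults = [], 0
        for page in refs:
            if page in frames:
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.remove(random.choice(frames))       # random victim
            frames.append(page)
        return faults

    print(random_faults([1, 4, 1, 6, 1, 6, 1, 3], 3))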

FIFO
Memory is treated as a queue: when a page comes in, it is inserted at the tail; when a page must be removed, the entry at the head of the queue is deleted.
Easy to understand and program.
Performance is not consistently good; it depends on the reference string.

FIFO Example
Consider the following reference string: 7 0 1 2 0 3 0 4 2
With a page frame of 3:

    Reference:  7   0   1   2   0   3   0   4   2
    Fault:      *   *   *   *       *   *   *   *
    Frames:     7   0   1   2   2   3   0   4   2
                    7   0   1   1   2   3   0   4
                        7   0   0   1   2   3   0

An * indicates a miss (the page requested by the CPU is not in the cache or in MM).

FIFO Example #2
Consider the following reference string: 1 2 3 4 1 2 5 1 2 3 4 5
With a page frame of 3:

    Reference:  1   2   3   4   1   2   5   1   2   3   4   5
    Fault:      *   *   *   *   *   *   *           *   *
    Frames:     1   2   3   4   1   2   5   5   5   3   4   4
                    1   2   3   4   1   2   2   2   5   3   3
                        1   2   3   4   1   1   1   2   5   5

We have 9 page faults.
Try performing this FIFO with a page frame of 4.
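A short FIFO fault counter (Python; the function name is illustrative) that reproduces the 9 faults above and also answers the 4-frame exercise:

    from collections import deque

    def fifo_faults(refs, num_frames):
        """Count page faults under FIFO replacement."""
        frames, queue, faults = set(), deque(), 0
        for page in refs:
            if page in frames:
                continue
            faults += 1
            if len(frames) == num_frames:
                victim = queue.popleft()      # the oldest resident page leaves first
                frames.remove(victim)
            frames.add(page)
            queue.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))   # 9
    print(fifo_faults(refs, 4))   # 10 -- more frames yet more faults (Belady's anomaly, next slide)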

Belady’s Anomaly
An increase in page frames does not necessarily mean a decrease in page faults (the FIFO example above rises from 9 faults with 3 frames to 10 faults with 4 frames).
More formally, Belady’s anomaly reflects the fact that, for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.

Optimal Algorithm
The page that will not be used for the longest period of time is replaced.
Guarantees the lowest page-fault rate for a fixed number of frames.
Difficult to implement because it requires future knowledge of the reference string.

Optimal Algorithm Example
Consider the following reference string: 7 0 1 2 0 3 0 4 2
With a page frame of 3:
We look ahead and see that 7 is the page which will not be used again, so we replace 7. We also note that after our first hit we should not replace 0 immediately, but rather 1, because 1 will not be referenced any more (2 will be referenced last).

    Reference:  7   0   1   2   0   3   0   4   2
    Fault:      *   *   *   *       *       *
    Frames:     7   0   1   2   2   3   3   4   4
                    7   0   1   1   2   2   3   3
                        7   0   0   0   0   2   2
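A sketch of the optimal (look-ahead) policy in Python; real systems cannot see future references, so a sketch like this only works on a recorded trace (function name illustrative):

    def optimal_faults(refs, num_frames):
        """Count page faults for the optimal policy: evict the page used farthest in the future."""
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) == num_frames:
                future = refs[i + 1:]
                # Distance to the next use; pages never used again count as infinitely far.
                def next_use(p):
                    return future.index(p) if p in future else float("inf")
                frames.remove(max(frames, key=next_use))
            frames.append(page)
        return faults

    print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2], 3))   # 6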

Least Recently Used
Approximates the optimal algorithm: replaces the page that has not been used for the longest period of time.
One way to track this is to keep the resident pages in a queue; on every page hit, the referenced page is moved to the tail to indicate it has been recently accessed, so the page at the head is always the least recently used one and is the page replaced on a fault.

LRU Example
Consider the following reference string: 7 0 1 2 0 3 0 4 0 3 0 2
With a page frame of 3 (frames listed from most recently used to least recently used):

    Reference:  7   0   1   2   0   3   0   4   0   3   0   2
    Fault:      *   *   *   *       *       *               *
    Frames:     7   0   1   2   0   3   0   4   0   3   0   2
                    7   0   1   2   0   3   0   4   0   3   0
                        7   0   1   2   2   3   3   4   4   3

We have 7 page faults.
Try performing this LRU with a page frame of 4.
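An LRU fault counter sketched with Python's OrderedDict (illustrative; the dict's ordering stands in for the recency queue described above):

    from collections import OrderedDict

    def lru_faults(refs, num_frames):
        """Count page faults under LRU replacement."""
        frames, faults = OrderedDict(), 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)        # hit: mark as most recently used
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict the least recently used page
            frames[page] = True
        return faults

    print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 0, 3, 0, 2], 3))   # 7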

Least Frequently Used
Counts the number of references made to each page; whenever the page is accessed, its counter is incremented by one.
The page with the smallest count is replaced; FIFO is used to resolve ties.
Rationale: a page with a bigger counter is an actively used page.
Problem: a page that was heavily used initially may never be used again, yet its large count keeps it resident. This can be solved by using a decaying counter.

LFU Example
Consider the following reference string: 7 0 1 2 0 3 0 4 0 3 0 2
With a page frame of 3 (each entry below is page:count, with the most recently loaded page on top):

    Reference:  7     0     1     2     0     3     0     4     0     3     0     2
    Fault:      *     *     *     *           *           *                       *
    Frames:     7:1   0:1   1:1   2:1   2:1   3:1   3:1   4:1   4:1   4:1   4:1   2:1
                      7:1   0:1   1:1   1:1   2:1   2:1   3:1   3:1   3:2   3:2   3:2
                            7:1   0:1   0:2   0:2   0:3   0:3   0:4   0:4   0:5   0:5

We have 7 page faults.
Try performing this LFU with a page frame of 4.
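An LFU fault counter sketched in Python (illustrative; a page's counter restarts at 1 each time it is loaded, and FIFO order breaks ties, matching the table above):

    def lfu_faults(refs, num_frames):
        """Count page faults under LFU with FIFO tie-breaking."""
        counts = {}          # insertion order doubles as FIFO (load) order
        faults = 0
        for page in refs:
            if page in counts:
                counts[page] += 1                  # hit: bump the reference counter
                continue
            faults += 1                            # miss
            if len(counts) == num_frames:
                # min() keeps the first minimum it meets, i.e. the oldest page,
                # so ties on the count are resolved FIFO.
                victim = min(counts, key=counts.get)
                del counts[victim]
            counts[page] = 1
        return faults

    print(lfu_faults([7, 0, 1, 2, 0, 3, 0, 4, 0, 3, 0, 2], 3))   # 7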

Most Frequently Used
The opposite of LFU: replace the page with the highest count; ties are resolved using FIFO.
Based on the argument that the page with the smallest count has probably just been brought in and is yet to be used.
Both LFU and MFU are uncommon, and their implementation is expensive.
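MFU needs only a one-line change to the LFU sketch above (again an illustrative helper, not a standard routine):

    def mfu_faults(refs, num_frames):
        """Same as lfu_faults above, but evict the page with the HIGHEST count."""
        counts, faults = {}, 0
        for page in refs:
            if page in counts:
                counts[page] += 1
                continue
            faults += 1
            if len(counts) == num_frames:
                victim = max(counts, key=counts.get)   # highest count; ties fall back to FIFO
                del counts[victim]
            counts[page] = 1
        return faults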