Online Algorithm Huaping Wang Apr.21


Cache Algorithms

Outline
- Introduction to cache algorithms
- A worst-case competitive analysis of the FIFO and LRU algorithms
- The randomized Marker algorithm
- Competitive analysis of the randomized Marker algorithm
- Conclusion

Online algorithm
An online algorithm responds to a sequence of service requests, each with an associated cost. An algorithm that is given the entire sequence of service requests in advance is called an offline algorithm. We typically use competitive analysis to evaluate an online algorithm against an optimal offline one.

Cache memory
Requests for web pages exhibit spatial locality and temporal locality. A cache memory stores copies of web pages to exploit these localities of reference. We assume that a web page can be placed in any slot of the cache; this is known as a fully associative cache.

Cache algorithms (page replacement policies)
- First-in, first-out (FIFO): evict the page that has been in the cache the longest.
- Least recently used (LRU): evict the page whose last request occurred furthest in the past.
- Random: evict a page chosen at random from the cache.

[Figure: the Random, FIFO, and LRU page replacement policies. Random replaces an old block chosen at random; FIFO replaces the old block present longest, tracked by insertion time; LRU replaces the least recently used old block, tracked by last-used time.]

Implementation complexity (Random)
- Requires only a random or pseudo-random number generator.
- Overhead is O(1) additional work per page replacement.
- Makes no attempt to take advantage of temporal or spatial locality.

Implementation complexity (FIFO)
- Requires only a queue Q storing references to the pages in the cache; pages are enqueued on Q when they enter the cache.
- A page replacement simply performs a dequeue operation on Q to determine which page to evict.
- This policy requires O(1) additional work per page replacement.
- Makes some attempt to take advantage of temporal locality.

Implementation complexity (LRU)
- Requires a priority queue Q; when a page is inserted in Q or its key is updated, it is assigned the highest key in Q.
- Each page request and page replacement takes O(1) time if Q is implemented as a sorted sequence based on a linked list.
- Still, the constant-time overhead and the extra space for the priority queue Q make this policy less attractive from a practical point of view.
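The FIFO and LRU bookkeeping described above can be sketched in code. This is an illustrative Python sketch, not code from the slides: the class and method names are invented, and an OrderedDict stands in for the recency-ordered linked list.

```python
from collections import OrderedDict, deque

class FIFOCache:
    """Evict the page that has been resident in the cache the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()          # pages in arrival order (front = oldest)
        self.pages = set()
        self.faults = 0

    def request(self, page):
        if page in self.pages:
            return                    # hit: FIFO ignores recency entirely
        self.faults += 1
        if len(self.pages) >= self.capacity:
            victim = self.queue.popleft()   # dequeue the oldest page
            self.pages.remove(victim)
        self.queue.append(page)
        self.pages.add(page)

class LRUCache:
    """Evict the page whose last request occurred furthest in the past."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()    # ordered from least to most recent
        self.faults = 0

    def request(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)    # hit: refresh recency
            return
        self.faults += 1
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # drop least recently used
        self.pages[page] = True
```

On the request sequence a, b, a, c, b with a two-slot cache, LRU faults four times while FIFO faults only three, because FIFO ignores the repeated hit on a when choosing a victim.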

Competitive analysis
Compare a particular online algorithm A to an optimal offline algorithm OPT. Given a particular sequence P = (p1, p2, ..., pn) of service requests:
- let cost(A, P) denote the cost of A on P;
- let cost(OPT, P) denote the cost of the optimal offline algorithm on P.

Competitive analysis (cont.)
The algorithm A is said to be c-competitive for P if cost(A, P) <= c * cost(OPT, P) + b, for some constant b >= 0. If A is c-competitive for every sequence P, then we simply say that A is c-competitive, and we call c the competitive ratio of A. If b = 0, then we say that A has a strict competitive ratio of c.

A worst-case competitive analysis of FIFO and LRU
Suppose the cache holds m pages, and consider FIFO and LRU performing page replacement for a program whose loop repeatedly requests m + 1 pages in cyclic order. Every request then misses, since the requested page was always evicted just before it is needed again. So for such a sequence P = (p1, p2, ..., pn) of page requests, FIFO and LRU each evict n times.

A worst-case competitive analysis of FIFO and LRU (cont.)
OPT evicts from the cache the page that will be requested furthest in the future, so on this sequence OPT performs a page replacement only once every m requests, i.e., cost(OPT, P) = n/m. Both FIFO and LRU are therefore c-competitive on this sequence P, where

c = n / (n/m) = m.
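This worst case is easy to reproduce in simulation. The sketch below is illustrative Python, not code from the slides (the function names are invented); it counts page faults for LRU and for Belady's offline OPT rule on a cyclic request sequence over m + 1 pages.

```python
def lru_faults(requests, m):
    """Count LRU page faults for a cache of m slots."""
    cache = []                        # front = least recently used
    faults = 0
    for p in requests:
        if p in cache:
            cache.remove(p)
            cache.append(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == m:
                cache.pop(0)          # evict least recently used
            cache.append(p)
    return faults

def opt_faults(requests, m):
    """Belady's offline OPT: evict the page needed furthest in the future."""
    cache = set()
    faults = 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == m:
            def next_use(q):          # index of q's next request, if any
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float("inf")   # never needed again: ideal victim
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

m = 4
requests = [i % (m + 1) for i in range(5 * (m + 1))]  # cyclic over m+1 pages
```

With m = 4, LRU faults on all 25 requests, while OPT faults on the initial fill and then only once per m requests, matching the ratio c = m; FIFO behaves identically to LRU on this sequence.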

The randomized Marker algorithm
The Marker algorithm emulates the best aspects of the deterministic LRU policy, while using randomization to avoid the worst-case situations that are bad for the LRU strategy.

The randomized Marker algorithm (cont.)
Associate with each page in the cache a Boolean variable "marked", initially set to "false" for every page in the cache. If a browser requests a page that is already in the cache, that page's marked variable is set to "true". Otherwise, if a browser requests a page that is not in the cache, a random page whose marked variable is "false" is evicted and replaced with the new page, whose marked variable is immediately set to "true".

The randomized Marker algorithm (cont.)
If all the pages in the cache have their marked variables set to "true", then all of them are reset to "false" (and an unmarked victim is then chosen as above).
[Figure: the Marker page replacement policy. A new block replaces an old block chosen at random among the unmarked ones.]
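The marked-bit bookkeeping above fits in a few lines. This is an illustrative Python sketch, not code from the slides: the names are invented, and an injectable random generator makes the randomness reproducible.

```python
import random

class MarkerCache:
    """Randomized Marker policy: evict a uniformly random unmarked page."""
    def __init__(self, capacity, rng=None):
        self.capacity = capacity
        self.marked = {}              # page -> marked bit (True/False)
        self.faults = 0
        self.rng = rng or random.Random()

    def request(self, page):
        if page in self.marked:
            self.marked[page] = True            # hit: mark the page
            return
        self.faults += 1
        if len(self.marked) == self.capacity:
            if all(self.marked.values()):       # round ends: unmark all
                for q in self.marked:
                    self.marked[q] = False
            victims = [q for q, mk in self.marked.items() if not mk]
            del self.marked[self.rng.choice(victims)]
        self.marked[page] = True                # new page enters marked
```

With a two-slot cache and requests a, b, c, the third request finds every page marked, so the policy resets the bits and evicts one of a, b at random before admitting c (marked).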

Competitive analysis for the Marker algorithm
Let P = (p1, p2, ..., pn) be a sufficiently long sequence of page requests. The Marker policy implicitly partitions the requests in P into rounds. Each round begins with all the pages in the cache having "false" marked labels. A round ends when all the pages in the cache have "true" marked labels (with the next request beginning the next round, since the policy then resets each such label to "false").

Competitive analysis for the Marker algorithm (cont.)
Consider the i-th round in P. Call a page requested in round i fresh if it is not in Marker's cache at the beginning of round i, and call a page in Marker's cache that has a false marked label stale.
- Let m_i denote the number of fresh pages referenced in the i-th round.
- Let b_i denote the number of pages that are in the cache for the OPT algorithm at the beginning of round i and are not in the cache for the Marker policy at this time.

[Figure: the state of Marker's cache and OPT's cache at the beginning of round i. Marker's cache is entirely stale, with m_i fresh blocks to be referenced in round i; OPT's cache contains b_i blocks not in Marker's cache, the remaining blocks also being in Marker's cache.]

Page replacements of OPT
Of the m_i fresh pages requested in round i, at most b_i can already be in OPT's cache, so OPT must perform at least m_i - b_i page replacements in round i. It must likewise perform at least b_{i+1} page replacements in round i. Since max(x, y) >= (x + y)/2, summing over all k rounds in P (with b_1 = 0, so the b_i terms telescope away) gives

cost(OPT, P) >= (1/2) * sum_{i=1..k} (m_i - b_i + b_{i+1}) >= (1/2) * sum_{i=1..k} m_i.

Page replacements of the Marker policy
The expected number of page replacements performed by the Marker policy is sum_{i=1..k} (m_i + n_i): each of the m_i fresh pages in round i causes one replacement, and n_i is the expected number of stale pages that are referenced in round i after having been evicted from the cache.

Page replacements of the Marker policy (cont.)
At the point in round i when a stale page v is referenced, the probability that v is out of the cache is at most f/g, where:
- f is the number of fresh pages referenced before page v;
- g is the number of stale pages that have not yet been referenced.
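A quick Monte Carlo check of the f/g bound (illustrative Python, not from the slides; the function name is invented): when f fresh pages have each evicted a uniformly random stale page and none of the g = m stale pages has been referenced yet, the chance that a particular stale page is out of the cache should be close to f/g.

```python
import random

def evicted_probability(m, f, trials=20000, seed=1):
    """Estimate P(a given stale page was evicted) after f fresh requests,
    each evicting one uniformly random stale page, from m stale pages."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        stale = list(range(m))
        for _ in range(f):
            stale.pop(rng.randrange(len(stale)))  # a fresh page evicts one
        if 0 not in stale:                        # did page 0 get evicted?
            hits += 1
    return hits / trials

# With g = m stale pages not yet referenced, the estimate is close to f/g
est = evicted_probability(m=10, f=3)
```

Here the bound is tight (the probability is exactly f/m), which is why the analysis focuses on how g shrinks as stale pages are referenced during the round.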

Page replacements of the Marker policy (cont.)
The cost to the Marker policy is highest when all requests to fresh pages are made before any requests to stale pages. In that case, when the j-th stale page is referenced, f = m_i fresh pages precede it and g = m - j + 1 stale pages are still unreferenced, so the expected number of evicted stale pages referenced in round i can be bounded as follows:

n_i <= sum_{j=1..m-m_i} m_i / (m - j + 1) = m_i * (1/(m_i + 1) + 1/(m_i + 2) + ... + 1/m).

Page replacements of the Marker policy (cont.)
Noting that the m-th harmonic number is H_m = 1 + 1/2 + ... + 1/m, we have n_i <= m_i * (H_m - H_{m_i}) <= m_i * (H_m - 1), since m_i >= 1. Thus, the expected number of page replacements performed by the Marker policy is at most

sum_{i=1..k} (m_i + n_i) <= H_m * sum_{i=1..k} m_i.

Competitive analysis for the Marker algorithm (cont.)
Therefore, the competitive ratio for the Marker policy is at most

(H_m * sum_{i=1..k} m_i) / ((1/2) * sum_{i=1..k} m_i) = 2 * H_m.

Using the approximation H_m ≈ ln m (more precisely, ln m < H_m <= 1 + ln m), the competitive ratio for the Marker policy is at most roughly 2 ln m; that is, the Marker policy is O(log m)-competitive.
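The closing approximation can be checked numerically (illustrative Python, not from the slides): H_m stays within 1 of ln m, so the 2*H_m bound grows like 2*ln m.

```python
import math

def harmonic(m):
    """H_m = 1 + 1/2 + ... + 1/m, the m-th harmonic number."""
    return sum(1.0 / j for j in range(1, m + 1))

# H_m is sandwiched between ln m and ln m + 1 for m >= 2,
# so the 2*H_m competitive ratio behaves like 2*ln m.
for m in (10, 100, 1000):
    assert math.log(m) < harmonic(m) < math.log(m) + 1
```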

Conclusion
Based on experimental comparisons, the ordering of the policies, from best to worst, is: (1) LRU, (2) FIFO, and (3) Random. But LRU still performs poorly in the worst case, whereas the competitive analysis shows that the Marker policy is fairly efficient.

Thanks