Distributed Systems CS 15-440 Caching – Part III Lecture 22, November 29, 2018 Mohammad Hammoud

Today…
Last Lecture: Cache Consistency
Today's Lecture: Replacement Policies
Announcements:
Project 4 is out. It is due on December 12 by midnight
PS5 is due on December 6 by midnight

Key Questions
What data should be cached and when? The Fetch Policy
How can updates be made visible everywhere? The Consistency (or Update Propagation) Policy
What data should be evicted to free up space? The Cache Replacement Policy

Working Sets
Given a time interval T, WorkingSet(T) is defined as the set of distinct data objects accessed during T
It is a function of the width of T
Its size (referred to as the working set size) is what matters most
It captures how adequate the cache size is with respect to the program's behavior
What happens if a client process performs repetitive accesses to some data, with a working set size that is larger than the underlying cache?
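The definition above can be sketched as a small helper; the function name and the list-based trace are illustrative assumptions, not code from the lecture:

```python
def working_set(trace, end, width):
    """Return the set of distinct pages accessed in the window
    of the last `width` references ending at position `end`."""
    return set(trace[max(0, end - width):end])

# First 8 accesses of the reference trace used later in the lecture
trace = [7, 0, 1, 2, 0, 3, 0, 4]

# Working set of the last 4 references is {0, 3, 4}: its size (3) is
# smaller than the window width because page 0 repeats (temporal locality)
ws = working_set(trace, len(trace), 4)
```

Widening the window can only add pages, so the working set size grows (or stays flat) with T, which is exactly why its size relative to the cache size is the quantity to watch.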

The LRU Policy: Sequential Flooding
To answer this question, assume:
Three pages, A, B, and C, as fixed-size caching units
An access pattern: A, B, C, A, B, C, etc.
A cache pool that consists of only two frames (i.e., equal-sized page containers)
Under LRU, every single access is then a page fault: A evicts B, B evicts C, C evicts A, and so on, forever
Although the access pattern exhibits temporal locality, no locality is exploited! This phenomenon is known as "sequential flooding"
For this access pattern, MRU works better!
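A minimal simulation of the scenario above (the `simulate` helper is an illustrative sketch, not code from the lecture): with two frames and the cyclic pattern A, B, C, LRU misses on every access, while MRU settles into alternating hits and misses after warm-up.

```python
def simulate(trace, size, policy):
    """Count misses for an eviction policy ('lru' or 'mru').

    The cache list is kept in recency order:
    least-recently used at the front, most-recently used at the back.
    """
    cache, misses = [], 0
    for page in trace:
        if page in cache:
            cache.remove(page)       # hit: refresh recency below
        else:
            misses += 1
            if len(cache) == size:
                # LRU evicts the front (oldest); MRU evicts the back (newest)
                cache.pop(0 if policy == "lru" else -1)
        cache.append(page)           # page becomes most-recently used
    return misses

pattern = list("ABC") * 3                  # A, B, C repeated three times
lru_misses = simulate(pattern, 2, "lru")   # 9 misses: every access faults
mru_misses = simulate(pattern, 2, "mru")   # 6 misses, 3 hits
```

MRU wins here because evicting the most-recently-used page keeps at least one page of the cycle resident long enough to be re-referenced.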

Types of Accesses
Why did LRU not perform well with this access pattern, although the pattern is repeatable?
The cache size was dwarfed by the working set size
As the time interval T is increased, how would the working set size change, assuming:
Sequential accesses (e.g., unrepeatable full scans): it will monotonically increase, and the working set will be very cache-unfriendly
Regular accesses, which demonstrate typical good locality: it will change non-monotonically (increasing and decreasing across program phases, though not necessarily over equal widths), and the working set will be cache-friendly only if the cache size is not dwarfed by its size
Random accesses, which demonstrate no or very little locality (e.g., accesses to a hash table): the working set will exhibit cache unfriendliness if its size is much larger than the cache size

Example
Reference Trace: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Consider three LRU-managed caches processing this trace: Cache X (size = 3 frames), Cache Y (size = 4 frames), and Cache Z (size = 5 frames). Stepping through the first 11 references:

Step  Access   Cache X (3)   Cache Y (4)   Cache Z (5)
  1     7        Miss          Miss          Miss
  2     0        Miss          Miss          Miss
  3     1        Miss          Miss          Miss
  4     2        Miss          Miss          Miss
  5     0        Hit           Hit           Hit
  6     3        Miss          Miss          Miss
  7     0        Hit           Hit           Hit
  8     4        Miss          Miss          Miss
  9     2        Miss          Hit           Hit
 10     3        Miss          Hit           Hit
 11     0        Miss          Hit           Hit

Running totals after step 11: Cache X has 2 hits and 9 misses; Cache Y has 5 hits and 6 misses; Cache Z has 5 hits and 6 misses. At every step, the larger caches do at least as well as the smaller ones.
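The walk-through above can be reproduced with a short LRU simulator (an illustrative sketch, not code from the lecture), confirming the hit/miss totals for the three cache sizes after the first 11 references:

```python
def lru_counts(trace, size):
    """Simulate an LRU cache of `size` frames; return (hits, misses)."""
    cache, hits, misses = [], 0, 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.remove(page)       # hit: refresh recency below
        else:
            misses += 1
            if len(cache) == size:
                cache.pop(0)         # evict the least-recently used page
        cache.append(page)           # most-recently used sits at the back
    return hits, misses

# First 11 references of the lecture's trace
trace = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0]
x = lru_counts(trace, 3)   # (2, 9)  -- Cache X
y = lru_counts(trace, 4)   # (5, 6)  -- Cache Y
z = lru_counts(trace, 5)   # (5, 6)  -- Cache Z
```

Note that the miss counts never increase as the cache grows, previewing the stack property discussed next.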

Observation: The Stack Property
Adding cache space never hurts, although it may or may not help
This is referred to as the "Stack Property"
LRU has the stack property, but not all replacement policies have it
E.g., FIFO does not have it: under FIFO, a larger cache can actually incur more misses (Belady's anomaly)
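A small sketch demonstrating FIFO's lack of the stack property, using the classic trace associated with Belady's anomaly (the trace and helper are illustrative additions, not from the lecture): growing the cache from 3 to 4 frames increases the miss count.

```python
from collections import deque

def fifo_misses(trace, size):
    """Count misses for a FIFO cache of `size` frames."""
    cache, misses = deque(), 0
    for page in trace:
        if page not in cache:
            misses += 1
            if len(cache) == size:
                cache.popleft()      # evict the oldest-inserted page
            cache.append(page)       # note: hits do NOT refresh position
    return misses

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
small = fifo_misses(trace, 3)   # 9 misses
large = fifo_misses(trace, 4)   # 10 misses: more frames, yet more misses
```

The anomaly arises because FIFO's eviction order ignores recency of use, so a bigger cache can hold a page just long enough to evict it right before its next reference.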

Next Class Server-Side Replication