Cooperative Caching, Simplified


Cooperative Caching, Simplified
Austin Chen

What is Cooperative Caching?
- A method for optimizing data-read performance in client-server architectures
- Builds on the fast speed of the cache on each local machine
- Pools the individual local client caches to boost client read speeds across the entire system

Quick Review of Memory Hierarchy

Quick Review of Memory Hierarchy
- Disk reads can be time consuming
- When one client needs to read information from another client's disk, the access is slow
- Hint: there are ways to improve this

How it might work in a simple system
Non-cooperative caching systems might use a three-level memory hierarchy:
- Local memory
- Server memory
- Server disk

The Cooperative Caching Method
Cooperative caching introduces a fourth level, a different client's memory:
- Local memory
- Server memory
- Server disk
- Another client's local memory
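To make the hierarchy concrete, here is a minimal sketch of the read path in Python. All names are illustrative (the slides define no API), and it assumes remote client memory is consulted before falling back to disk, consistent with the speed comparison on the next slide.

```python
# Minimal sketch of the four-level cooperative-caching read path.
# Caches are plain dicts mapping block_id -> data; "disk" is any object
# with a read(block_id) method. All names here are hypothetical.

def read_block(block_id, local_cache, server_cache, peer_caches, disk):
    if block_id in local_cache:            # 1. local client memory (fastest)
        return local_cache[block_id]
    if block_id in server_cache:           # 2. server memory
        return server_cache[block_id]
    for peer_cache in peer_caches:         # 3. another client's memory
        if block_id in peer_cache:
            return peer_cache[block_id]
    return disk.read(block_id)             # 4. server disk (slowest)
```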

How is this all possible?
- Processor speed has improved faster than disk performance
- Requires a high-speed, low-latency network whose transfers outpace hard-disk reads
- Fetching data from remote memory is roughly three times faster than getting data from a remote disk
- With cooperative caching, remote memory can be accessed ten to twenty times as quickly as disk
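A quick back-of-the-envelope calculation shows why this pays off. The latencies and hit rates below are invented purely for illustration, chosen so that remote memory is roughly an order of magnitude faster than disk, as the slide claims.

```python
# Hypothetical per-level latencies (milliseconds) and hit rates.
t = {"local": 0.25, "server": 1.05, "remote_mem": 1.25, "disk": 15.85}
hits = {"local": 0.80, "server": 0.10, "remote_mem": 0.06, "disk": 0.04}

avg_ms = sum(hits[level] * t[level] for level in t)
print(f"average block read time: {avg_ms:.2f} ms")         # ~1.01 ms

# Without cooperative caching, the remote-memory hits go to disk instead:
avg_no_coop = avg_ms + hits["remote_mem"] * (t["disk"] - t["remote_mem"])
print(f"without cooperation:     {avg_no_coop:.2f} ms")    # ~1.89 ms
```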

Four Cooperative Caching Algorithms
- Direct Client Cooperation
- Greedy Forwarding
- Centrally Coordinated Caching
- N-Chance Forwarding

Direct Client Cooperation
- The simplest algorithm
- When a client's local cache fills and overflows, the client forwards evicted cache entries to an idle machine's cache
- The active client can then access the idle machine's cache to satisfy read requests
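A sketch of the overflow path, assuming dict-based caches; `idle_peers` stands in for whatever mechanism the system uses to find idle donor machines.

```python
# Direct Client Cooperation: on overflow, park evicted entries on an idle
# peer instead of dropping them. idle_peers is a hypothetical list of
# caches belonging to machines currently known to be idle.

def handle_overflow(local_cache, capacity, idle_peers):
    while len(local_cache) > capacity:
        block_id, data = local_cache.popitem()   # evict some victim entry
        if idle_peers:
            idle_peers[0][block_id] = data       # donate it to an idle peer
        # else: no idle machine is available, so the entry is simply lost

def direct_coop_read(block_id, local_cache, idle_peers, fetch_from_server):
    if block_id in local_cache:
        return local_cache[block_id]
    for peer in idle_peers:                      # check donated entries
        if block_id in peer:
            return peer[block_id]
    return fetch_from_server(block_id)           # fall back to the server
```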

Direct Client Cooperation

Direct Client Cooperation
Pros: Simple; can be implemented without modifying the server. As far as the server is concerned, a client utilizing remote memory simply appears to have a temporarily enlarged cache.
Cons: The client donating its cache must be idle. If all clients are active, no machine can leverage cooperative caching.

Greedy Forwarding Algorithm
- Each client manages its own cache "greedily," meaning it has no permission to modify another client's cache
- If a client does not find a block in its local cache, it looks in the server cache
- If the block is not in the server cache, it is fetched from another client's cache
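A sketch of that lookup order. One common realization (an assumption here, not stated on the slide) is that the server keeps a directory of which client caches hold which blocks, so it can forward a miss to the right client.

```python
# Greedy Forwarding lookup: my cache -> server memory -> another client's
# cache -> server disk. The server-side directory is an assumed detail.

class Server:
    def __init__(self):
        self.memory_cache = {}   # block_id -> data
        self.directory = {}      # block_id -> the client cache holding it
        self.disk = {}           # block_id -> data (stand-in for a real disk)

def greedy_read(block_id, local_cache, server):
    if block_id in local_cache:                 # 1. my own cache
        return local_cache[block_id]
    if block_id in server.memory_cache:         # 2. server memory
        return server.memory_cache[block_id]
    holder = server.directory.get(block_id)    # 3. another client's cache
    if holder is not None and block_id in holder:
        return holder[block_id]
    return server.disk[block_id]                # 4. server disk
```

Note that the holder's cache is only read, never modified: each client still manages its own cache greedily.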

Greedy Forwarding Algorithm
(Diagram: a client needs a particular block of data)

Greedy Forwarding Algorithm
Pros: Appealing because it is "fair": each client only has to worry about managing its own local resources.
Cons: The lack of coordination can leave duplicate copies of the same block in several caches, wasting aggregate cache space.

Centrally Coordinated Caching
- Attempts to lessen the duplication problem by coordinating the caches
- The server statically partitions each client cache into a locally managed section and a globally managed section

Centrally Coordinated Caching
(Diagram: shaded regions = globally managed cache)

Local vs. global cache: how does it work?
- The server manages the globally managed portions with a global replacement algorithm, typically LRU
- When the server evicts a block from its local cache to make room for new data, it sends the evicted block to the globally managed (LRU) cache
- The global cache then boots out its least recently used block to make room for the overflow
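The globally managed section is essentially a classic LRU cache. A minimal sketch using Python's OrderedDict (class and method names are illustrative):

```python
from collections import OrderedDict

class GlobalLRUCache:
    """Server-managed global section of the client caches (LRU policy)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()              # least recently used first

    def get(self, block_id):
        if block_id not in self.blocks:
            return None
        self.blocks.move_to_end(block_id)        # now most recently used
        return self.blocks[block_id]

    def accept_eviction(self, block_id, data):
        """Called when the server evicts a block from its local cache."""
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # boot the oldest block
```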

Centrally Coordinated Caching
Pros: Achieves a high global hit rate through global management. Because the server manages the global cache, it covers the whole system without creating duplicate entries.
Cons: Each client's locally managed cache is smaller, so the local hit rate drops. Centrally coordinating the cache may also impose significant load on the server.

N-Chance Forwarding
- Each client cache is now dynamically partitioned based on client activity
- Clients preferentially cache singlets
- A singlet is a block stored in only one client's cache
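A sketch of the eviction rule. The recirculation detail below (a singlet gets n "chances": each time it is evicted, the count is decremented and the block is forwarded to a random peer, and it is dropped at zero) follows the classic description of N-Chance; treat it as an assumption, since the slide does not spell it out.

```python
import random

N = 2  # chances a singlet gets before leaving the cooperative cache

class Block:
    def __init__(self, block_id, data):
        self.id, self.data = block_id, data
        self.chances = None          # set on first eviction as a singlet

class Client:
    def __init__(self):
        self.cache = {}              # block_id -> Block

def evict(client, block, all_clients):
    """N-Chance rule: recirculate singlets to a random peer up to N times."""
    del client.cache[block.id]
    if any(block.id in c.cache for c in all_clients):
        return                       # another copy exists: safe to drop
    if block.chances is None:
        block.chances = N            # first eviction of this singlet
    if block.chances > 0:
        block.chances -= 1
        peers = [c for c in all_clients if c is not client]
        if peers:
            random.choice(peers).cache[block.id] = block  # one more chance
    # chances exhausted: the last copy falls out of the cooperative cache
```

A larger n keeps rarely shared blocks alive longer at the cost of more forwarding traffic.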

N-Chance Forwarding
(Diagram: shaded regions = globally managed cache)

N-Chance Forwarding
Pros: Improves on centrally coordinated caching by taking client activity into account.
Cons: Can bounce a block of data among multiple caches while the global cache is being resized, adding extra server load.

Performance, compared

Why not just get more server RAM?
- The server is less loaded overall because it handles network requests instead of large disk lookups
- Cooperative caching systems are more cost-effective: it is cheaper to add 16 GB of RAM to each of 100 clients than to add one big 1.6 TB bank of RAM to a single server
- https://downloadmoreram.com/