Cooperative Caching, Simplified


1 Cooperative Caching, Simplified
Austin Chen

2 What is Cooperative Caching?
- A method for optimizing data read performance in client-server architectures
- Builds on the speed of each client's local cache
- Pools the individual local client caches so that reads can be served from any client's memory, boosting read speeds across the entire system

3 Quick Review of Memory Hierarchy

4 Quick Review of Memory Hierarchy
- Disk reads can be time consuming
- When one client needs to read data from another client's disk, the read is slow
- Hint: there are ways to improve this

5 How it might work in a simple system
Non-cooperative caching systems might use a three-level memory hierarchy, satisfying a read from the first level that holds the block (as in the sketch below):
- Local memory
- Server memory
- Server disk
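A minimal sketch of that three-level read path in Python; the Client and Server objects and their fields are illustrative, not from the original deck:

    def read_block(client, block_id):
        if block_id in client.local_memory:        # 1. local memory
            return client.local_memory[block_id]
        server = client.server
        if block_id in server.memory:              # 2. server memory
            return server.memory[block_id]
        return server.read_from_disk(block_id)     # 3. server disk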

6 The Cooperative Caching Method
Cooperative caching introduces a fourth level, another client's memory, checked before falling back to the slow server disk (extended sketch below):
- Local memory
- Server memory
- Another client's local memory
- Server disk
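Extending the sketch above with the new level; the server's directory mapping each block to the client caching it is an assumed helper, not from the deck:

    def cooperative_read_block(client, block_id):
        if block_id in client.local_memory:        # 1. local memory
            return client.local_memory[block_id]
        server = client.server
        if block_id in server.memory:              # 2. server memory
            return server.memory[block_id]
        peer = server.directory.get(block_id)      # 3. another client's memory
        if peer is not None and block_id in peer.local_memory:
            return peer.local_memory[block_id]
        return server.read_from_disk(block_id)     # 4. server disk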

7 How is this all possible?
- Processor speeds have improved faster than disk performance
- Requires a high-speed, low-latency network whose transfers outrun hard disk reads
- Fetching data from remote memory can be ~3 times faster than fetching it from a remote disk; with the fast networks cooperative caching assumes, remote memory can be accessed ten to twenty times as quickly as disk (a rough check below)
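A back-of-the-envelope check of that ratio; the latency figures below are assumptions chosen only to illustrate the ten-to-twenty-times claim, not measurements from the deck:

    REMOTE_MEMORY_MS = 0.5   # assumed: network round trip + memory copy
    REMOTE_DISK_MS = 10.0    # assumed: seek + rotation + transfer

    speedup = REMOTE_DISK_MS / REMOTE_MEMORY_MS
    print(f"remote memory is ~{speedup:.0f}x faster than remote disk")  # ~20x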

8 Four Cooperative Caching Algorithms
- Direct Client Cooperation
- Greedy Forwarding
- Centrally Coordinated Caching
- N-Chance Forwarding

9 Direct Client Cooperation
- The simplest algorithm
- When a client's local cache fills and overflows, it forwards evicted cache entries to an idle machine's cache
- The active client can then read from the idle machine's cache to satisfy requests (a minimal sketch follows)
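A minimal sketch of the overflow path, assuming an LRU helper (pop_lru) and a forwarded map that remembers where donated blocks went; all names are illustrative:

    def evict_with_cooperation(client, peers):
        block_id, block = client.local_memory.pop_lru()  # assumed helper
        for peer in peers:
            if peer.is_idle and peer.has_free_memory():
                peer.donated[block_id] = block     # place block in idle peer
                client.forwarded[block_id] = peer  # remember where it went
                return
        # no idle peer available: the overflowing block is simply dropped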

10 Direct Client Cooperation

11 Direct Client Cooperation
Pros: Simple; can be implemented without modifying the server. As far as the server is concerned, a client using remote memory simply appears to have a temporarily enlarged cache.
Cons: The client donating its cache MUST be idle; if all clients are active, no machine can leverage cooperative caching.

12 Greedy Forwarding Algorithm
- Each client manages its own cache "greedily": it never modifies another client's cache
- If a client does not find a block in its local cache, it asks the server, which checks its own cache
- If the block is not in the server's cache either, the request is forwarded to a client cache that holds it (sketched below)
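A sketch of the greedy lookup order, assuming the server keeps a list of its clients; the names are illustrative:

    def greedy_read(client, block_id):
        if block_id in client.cache:                # local cache first
            return client.cache[block_id]
        return handle_miss(client.server, client, block_id)

    def handle_miss(server, requester, block_id):
        if block_id in server.memory:               # server cache next
            return server.memory[block_id]
        for peer in server.clients:                 # then other clients' caches
            if peer is not requester and block_id in peer.cache:
                return peer.cache[block_id]
        return server.read_from_disk(block_id)      # disk as a last resort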

13 Greedy Forwarding Algorithm
Needs this block of data:

14 Greedy Forwarding Algorithm
Pros: Appealing because it is "fair": clients need only manage their own local resources.
Cons: The lack of coordination can leave duplicate copies of the same block in many caches.

15 Centrally Coordinated Caching
- Attempts to lessen the data-duplication problem through coordinated caches
- Each client's cache is statically partitioned into a locally managed section and a section managed globally by the server (see the sketch below)
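A sketch of the static split; the 80/20 fraction is an assumption for illustration, not a figure from the deck:

    GLOBAL_FRACTION = 0.8    # assumed share handed to the server

    def partition_cache(total_blocks):
        global_blocks = int(total_blocks * GLOBAL_FRACTION)
        local_blocks = total_blocks - global_blocks
        return local_blocks, global_blocks

    local_part, global_part = partition_cache(1024)
    print(local_part, "blocks managed locally,", global_part, "globally")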

16 Centrally Coordinated Caching
= Globally Managed Cache

17 Local vs. Global cache… how does it work?
- The server manages the globally managed portion with a global replacement algorithm, e.g. LRU
- When the server evicts a block from its own cache to make room for new data, it sends the evicted block to the globally managed (LRU) cache
- The global cache then evicts its least recently used block to make room for the overflow (a sketch follows)
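A sketch of that eviction flow using an OrderedDict as the LRU-managed global cache; the class is illustrative:

    from collections import OrderedDict

    class GlobalCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()          # least recently used first

        def insert(self, block_id, block):
            # The server sends blocks evicted from its own cache here.
            self.blocks[block_id] = block
            self.blocks.move_to_end(block_id)
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the LRU block

        def get(self, block_id):
            if block_id not in self.blocks:
                return None
            self.blocks.move_to_end(block_id)    # mark as recently used
            return self.blocks[block_id]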

18 Centrally Coordinated Caching
Pros: Global management yields a high global hit rate, and coordinating a single global cache avoids duplicate entries.
Cons: Each individual client's locally managed cache is smaller, so local hit rates drop; centrally coordinating the cache can also impose significant load on the server.

19 N-Chance Forwarding
- Each client's cache is now dynamically partitioned based on client activity
- The N-Chance algorithm has clients preferentially cache singlets: blocks stored in only one client's cache
- Rather than discarding an evicted singlet, a client forwards it to a random peer, giving the block n chances to stay in the cooperative cache (sketched below)
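A sketch of the n-chance eviction rule; the value of n, the pop_lru helper, and the peer objects are assumptions for illustration:

    import random

    N = 2  # assumed number of chances a singlet gets

    def is_singlet(block_id, peers):
        # The block was only in the evicting client's cache if no peer has it.
        return not any(block_id in p.cache for p in peers)

    def evict(client, peers):
        block_id, block, count = client.cache.pop_lru()  # assumed helper
        if count is None and is_singlet(block_id, peers):
            count = N                     # first eviction of a singlet
        if count:                         # chances remain: recirculate
            peer = random.choice(peers)
            peer.cache.insert(block_id, block, count - 1)
        # otherwise drop the block: it is duplicated elsewhere
        # or has used up its chances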

20 N-Chance Forwarding
= Globally Managed Cache

21 N-Chance Forwarding
Pros: Improves on centrally coordinated caching by taking client activity into account.
Cons: Can bounce a block of data among multiple caches while the global cache is being resized, adding extra server load.

22 Performance, compared

23 Why not just get more server RAM?
- The server is less loaded overall: it handles network requests instead of large disk lookups
- Cooperative caching is more cost effective: it is cheaper to add 16 GB of RAM to each of 100 clients than to add one big 1.6 TB bank of RAM to a single server (the arithmetic is below)
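The aggregate-memory arithmetic behind that claim:

    clients = 100
    ram_per_client_gb = 16
    aggregate_gb = clients * ram_per_client_gb
    print(aggregate_gb, "GB total")  # 1600 GB, i.e. the 1.6 TB of cooperative cache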

