1
Cooperative Caching Middleware for Cluster-Based Servers
Francisco Matias Cuenca-Acuna, Thu D. Nguyen
Panic Lab, Department of Computer Science, Rutgers University
2
Our work
Goal
–Provide a mechanism to co-manage the memory of cluster-based servers
–Deliver a generic solution that can be reused by Internet servers and file systems
Motivation
–Emerging Internet computing model based on infrastructure services like Google, Yahoo!, and others
»Being built on clusters: scalability, fault tolerance
–It is hard to build efficient cluster-based servers
»Dealing with distributed memory
»If memories are used independently, servers only perform well when the working set fits on a single node
3
Previous solutions
Request distribution based on load and data affinity
[Diagram: a front end distributes requests across web server + file system nodes connected by a network]
4
Previous solutions
Request distribution based on load and data affinity
[Diagram: a distributed front end performs round-robin request distribution across web server + file system nodes connected by a network]
5
Our approach
Cooperative block caching and global block replacement
[Diagram: round-robin request distribution across web server + file system nodes whose memories form a cooperative block cache over the network]
6
Our approach
Other uses for our CC layer
[Diagram: round-robin request distribution; the cooperative caching (CC) layer can also be shared by other applications on the cluster]
7
Why cooperative caching, and what do we give up?
Advantages of our approach
–Generality
»By presenting a block-level abstraction
»Can be used across very different applications such as web servers and file systems
»Does not need any application knowledge
–Reusability
»By presenting it as a generic middleware layer (interface sketched below)
Disadvantages of our approach
–Generality + no application knowledge → possible performance loss
–How much?
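As a rough illustration of the kind of block-level interface such a middleware layer could expose, here is a minimal Python sketch; the names (CoopCache, get_block) and the callback-based structure are assumptions for illustration, not the actual CCM API.

    # Hypothetical sketch: a block-level cooperative-caching interface that an
    # application (web server, file system) uses without exposing any
    # application-specific knowledge to the caching layer.
    class CoopCache:
        """Per-node view of a cluster-wide, block-granularity cache."""

        def __init__(self, node_id, fetch_remote, read_disk):
            self.node_id = node_id
            self.local = {}                    # (file_id, block_no) -> block bytes
            self.fetch_remote = fetch_remote   # callback: try to get the block from a peer
            self.read_disk = read_disk         # callback: read the block at the file's home

        def get_block(self, file_id, block_no):
            """Local hit, else global (peer) hit, else disk read at the file's home."""
            key = (file_id, block_no)
            if key in self.local:              # local memory hit
                return self.local[key]
            data = self.fetch_remote(key)      # global hit: block cached on a peer
            if data is None:
                data = self.read_disk(key)     # miss: the home node reads it from disk
            self.local[key] = data             # insertion may trigger replacement
            return data

A web server or a distributed file system could sit on top of such an interface and serve files block by block, which is what gives the layer its generality: it needs no application-level knowledge.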
8
Our contributions
Study carefully why cooperative caching, as designed for cooperative client caching to reduce server load, does not perform as well as content-aware request distribution
When compared to a web server that uses content-aware request distribution:
–Lose 70% when using a traditional CC algorithm
–Lose only 8% when using our adapted version (CCM)
Adapt cooperative caching to better suit cluster-based servers
–Trade lower local hit rates for higher total hit rates (local + global)
9
Our cooperative caching algorithm (CCM)
Files are distributed across all nodes
–No replication
–The node holding a file on disk is called the file's home
–Homes are responsible for tracking blocks in memory
Master blocks and non-master blocks
–There is only one master block for each block/file in memory
–CCM only tracks master blocks
Hint-based block location (sketched below)
–Algorithm based on Dahlin et al. (1994)
–Nodes have approximate knowledge of block location and may have to follow a chain of nodes to get to it
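To make the hint chasing concrete, here is a minimal Python sketch of hint-based block location in the style of Dahlin et al. (1994); the Node structure and hop bound are illustrative assumptions, not the paper's protocol.

    # Hypothetical sketch: each node keeps only approximate hints about which
    # peer holds the master copy of a block, so a lookup may have to follow a
    # chain of nodes before reaching the block (or giving up and going to disk).
    class Node:
        def __init__(self, name):
            self.name = name
            self.local_cache = {}   # key -> block data held in this node's memory
            self.hints = {}         # key -> Node believed to hold the master block

    def locate_block(start, key, max_hops=8):
        """Follow location hints from `start`; return (data, holder) or (None, None)."""
        node, hops = start, 0
        while hops < max_hops:
            if key in node.local_cache:        # the hint chain ended at the real holder
                return node.local_cache[key], node
            nxt = node.hints.get(key)
            if nxt is None or nxt is node:     # hint chain exhausted
                break
            node, hops = nxt, hops + 1         # forward the request: one more hop
        return None, None                      # caller falls back to the home's disk

    # Usage: A's hint points at B, B's hint points at C, and C holds the block.
    a, b, c = Node("A"), Node("B"), Node("C")
    c.local_cache[("f", 0)] = b"block data"
    a.hints[("f", 0)] = b
    b.hints[("f", 0)] = c
    data, holder = locate_block(a, ("f", 0))
    assert holder is c and data == b"block data"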
10
Replacement mechanisms
Each node maintains local LRU lists
Exchange age hints when forwarding blocks
–Piggyback the age of the oldest block
Replacement (decision logic sketched below)
–Victim is a local block: evict
–Victim is a master block:
»If it is the oldest block in the cluster according to the age hints, evict
»Otherwise, forward it to the peer with the oldest block
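A minimal Python sketch of this replacement decision, assuming each node keeps a map of piggybacked age hints (oldest-block last-access time per peer); the function name and return convention are illustrative, not taken from the paper.

    # Hypothetical sketch of CCM-style replacement: a non-master victim is simply
    # evicted; a master-block victim is evicted only if the age hints suggest it
    # is the oldest block in the cluster, otherwise it is forwarded to the peer
    # believed to hold the oldest block.
    def choose_replacement_action(victim_age, victim_is_master, peer_age_hints):
        """Decide the fate of the local LRU victim.

        victim_age       -- last-access time of the local LRU victim (smaller = older)
        victim_is_master -- True if this node holds the only in-memory copy
        peer_age_hints   -- {peer_id: age of that peer's oldest block}, piggybacked
                            on earlier block forwards and therefore possibly stale
        Returns ("evict", None) or ("forward", peer_id).
        """
        if not victim_is_master:
            return ("evict", None)              # duplicate copies are cheap to drop
        if not peer_age_hints:
            return ("evict", None)              # no hints: treat the victim as oldest
        oldest_peer = min(peer_age_hints, key=peer_age_hints.get)
        if victim_age <= peer_age_hints[oldest_peer]:
            return ("evict", None)              # victim looks globally oldest
        return ("forward", oldest_peer)         # keep the master block alive elsewhere

    # Example: the master-block victim (age 120) is younger than peer B's oldest
    # block (age 95), so it is forwarded to B instead of being evicted.
    assert choose_replacement_action(120, True, {"A": 130, "B": 95}) == ("forward", "B")

The forwarding step is where CCM trades network bandwidth and local hit rate for a higher total (local + global) hit rate.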
11
Example of CCM at work
[Animated diagram omitted: a request for block b is forwarded from node to node, following location hints, until the node caching b is reached and the block is returned]
12
Assessing performance
Compare a CCM-based web server against one that uses content-aware request distribution
–L2S (HPDC 2000)
»Efficient and scalable
»Application-specific request distribution
»Maintains global information
»File-based caching
Event-driven simulation
–The same simulator used for L2S
The platform we simulate is equivalent to:
–1 Gbps VIA LAN
–Clusters of 4 and 8 nodes, each with a single 800 MHz Pentium III
–An IDE hard drive on each node
13
Workload
Four WWW traces (characteristics in the extra slides); the server is driven as fast as possible
14
Results Throughput for Clarknet on 8 nodes
15
Hit rate
Hit rate distribution on CCM-Basic; hit rate distribution on CCM; total hit rate [figures omitted]
16
Normalized throughput Throughput normalized versus L2S
17
Resource utilization CCM’s resource utilization
18
Scalability Throughput when running on varying cluster sizes
19
Further results
Performance differences between CCM and L2S may be affected by:
–L2S's use of TCP hand-off
–L2S's assumption that files are replicated everywhere
–Refer to the paper for estimates of the potential performance difference due to these factors
Current work
–Limit the amount of metadata maintained by CCM
»To reduce memory usage
»Discard outdated information
–Lazy eviction and forwarding notification
»Finds a block in 1.1 hops on average (vs. 2.4)
»10% decrease in response time
»2% increase in throughput
20
Conclusions
A generic block-based cooperative caching algorithm can efficiently co-manage cluster memory
–CCM performs almost as well as a highly optimized, content-aware request distribution web server
–CCM scales linearly with cluster size
–Presenting a block-based solution to a file-based application led to only a small performance loss, so it should work even better for block-based applications
CCM achieves high performance by using a new replacement algorithm well suited to a server environment
–Trades off local hit rates and network bandwidth for increased total hit rates
–The right trade-off given current network and disk technology trends
21
Future & related work
Future work
–Investigate the importance of load balancing
–Provide support for writes
–Validate simulation results with an implementation
Some related work
–PRESS (PPoPP 2001)
–L2S (HPDC 2000)
–LARD (ASPLOS 1998)
–Cooperative Caching (OSDI 1994)
–Cluster-Based Scalable Network Services (SOSP 1997)
22
Thanks to
Liviu Iftode, Ricardo Bianchini, Vinicio Carreras, Xiaoyan Li
Want more information? www.panic-lab.rutgers.edu
23
Extra slides – Simulation parameters
24
Extra slides – Response time Response time normalized versus L2S
25
Extra slides – Hops vs. hit rate Number of hops versus hit rate
26
Extra slides – Trace characteristics
27
Extra slides – Using location hints
[Animated diagram omitted: the same example as the "Example of CCM at work" slide, showing a request for block b forwarded along location hints until the block is found]