BD-Cache: Big Data Caching for Datacenters

Boston University*   Northeastern University†

Problem
- In a multi-tenant datacenter, the network (a) between the compute clusters and the shared storage and (b) among different racks of the compute clusters may become a bottleneck.
- Big Data frameworks such as Hadoop and Spark are common residents of these datacenters, and most of their jobs are I/O-bound, so they can be impacted by these potential network bottlenecks.
- Previous studies show that Big Data frameworks exhibit high input-data reuse, uneven data popularity, and sequential data access.

Implementation
- A two-level caching mechanism, implemented by modifying the original Ceph RADOS Gateway (RGW).
- The L1 cache and L2 cache are logically separated but share the same physical cache infrastructure.
- BD-Cache supports read and write traffic but caches only on read operations, stores data on SSDs running EXT4, reads and writes data asynchronously for better performance, understands both Swift and S3, and uses random replacement.

Our Architecture
- Cache nodes are placed one per rack, and each holds high-performance Intel NVMe SSDs.
- L1 cache: rack-local, which reduces inter-rack traffic among the cluster racks.
- L2 cache: distributed and shared among the racks, which reduces traffic between the compute clusters and the back-end storage.
[Architecture diagram: racks 1..N of the compute cluster, each with its own cache node holding the L1 cache; the cache nodes together form the shared L2 cache in front of the storage cluster.]

Methodology
- Experimental configurations: Unmodified-RGW and Cache-RGW.
- Ceph cluster: 10 Lenovo storage nodes, each with 9 HDDs and 128 GB DRAM.
- Cache node: 2x 1.5 TB Intel SSDs and 128 GB DRAM.
- Requests: 4 GB files requested in parallel via curl.

Initial Results
- Cache hit performance: caching improves read performance significantly; Cache-RGW saturates the SSD.
- Cache miss performance: Cache-RGW imposes no overhead.

Future Work
- Evaluate the caching architecture by benchmarking real-world workloads.
- Prefetching.
- Cache replacement algorithms.
- Enable caching on write operations.

Project Webpage:
Github Repo for Cache-RGW Code:
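To make the caching policy above concrete, here is a minimal Python sketch of a two-level read-through cache with random replacement, where only read operations populate the cache and writes bypass it. All names, capacities, and the dict-backed "storage cluster" are hypothetical illustrations; the real Cache-RGW implements this inside the modified RADOS Gateway.

```python
import random

class BDCacheSketch:
    """Illustrative two-level cache: L1 models the rack-local tier, L2 the
    tier shared among racks; both sit in front of a backing store.
    Random replacement; only reads populate the cache (hypothetical sketch)."""

    def __init__(self, backend, l1_capacity=2, l2_capacity=4):
        self.backend = backend            # dict standing in for the storage cluster
        self.l1, self.l2 = {}, {}
        self.l1_cap, self.l2_cap = l1_capacity, l2_capacity

    def _admit(self, cache, cap, key, value):
        if key not in cache and len(cache) >= cap:
            cache.pop(random.choice(list(cache)))   # random replacement
        cache[key] = value

    def read(self, key):
        if key in self.l1:                          # rack-local hit
            return self.l1[key], "L1 hit"
        if key in self.l2:                          # shared-tier hit: promote to L1
            self._admit(self.l1, self.l1_cap, key, self.l2[key])
            return self.l1[key], "L2 hit"
        value = self.backend[key]                   # miss: fetch from back-end storage
        self._admit(self.l2, self.l2_cap, key, value)
        self._admit(self.l1, self.l1_cap, key, value)
        return value, "miss"

    def write(self, key, value):
        self.backend[key] = value                   # writes are not cached
```

A read miss fetches from the back end and fills both tiers, so a subsequent read from the same rack hits L1, while a read from another rack would at least hit the shared L2 tier.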
An anycast network solution lets compute nodes reach the nearest cache node, and if a cache node fails, their requests are transparently redirected to a redundant cache node.
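The anycast behavior can be sketched as nearest-healthy-node selection. The function below is a hypothetical model, not the actual network configuration: it assumes nodes are described as (rack, healthy) pairs and models "nearest" as the rack-index distance from the client.

```python
def select_cache_node(nodes, client_rack):
    """Anycast-style selection sketch: prefer the closest healthy cache
    node; a failed node is skipped transparently. `nodes` is a list of
    (rack, healthy) pairs; distance is modeled as the difference of rack
    indices (an assumption for illustration only)."""
    healthy = [rack for rack, up in nodes if up]
    if not healthy:
        raise RuntimeError("no cache node available")
    return min(healthy, key=lambda rack: abs(rack - client_rack))
```

With all nodes healthy a client lands on its own rack's cache node; if that node goes down, the same call returns the nearest surviving one, which is the failover the poster describes.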

