1
A Dynamic Caching Mechanism for Hadoop using Memcached Gurmeet Singh Puneet Chandra Rashid Tahir University of Illinois at Urbana Champaign Presenter: Chang Dong
2
Outline 1.Memcached 2.Hadoop-Memcached
3
1 Memcached
4
What is memcached, briefly? Memcached is a high-performance, distributed memory object caching system, generic in nature. It is a key-based cache daemon that stores data and objects in dedicated or spare RAM for very fast access. It is a "dumb" distributed hash table: it provides no redundancy, failover, or authentication. If those are needed, the client has to handle them.
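The "dumb distributed hash table" idea can be sketched as follows: the client, not the servers, decides which node in the pool owns a key, and the memcached daemons never talk to each other. This is a minimal illustration (real clients typically use consistent hashing so that adding a server remaps fewer keys); the server addresses are hypothetical.

```python
import hashlib

def pick_server(key: str, servers: list) -> str:
    """Map a key to one server in the pool. Routing is done entirely
    client-side; the memcached daemons are unaware of each other."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
# The same key always routes to the same server.
assert pick_server("user:42", servers) == pick_server("user:42", servers)
```

Because routing is deterministic, every client configured with the same server list agrees on where a key lives without any coordination.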
5
Why was memcached made? It was originally developed by Danga Interactive to speed up LiveJournal.com. It dropped the database load to almost nothing, yielding faster page load times for users, better resource utilization, and faster access to the databases on a memcache miss. http://www.danga.com/memcached/
6
Memcached
7
Where does memcached reside? Memcached is not part of the database; it sits outside it, on one server or spread over a pool of servers.
8
Architecture
9
Why use memcached? To reduce the load on the database by caching data BEFORE requests hit the database. It can be used for more than just holding database results (arbitrary objects), improving response time across the entire application. Feel the need for speed: memcached lives in RAM, which is much faster than hitting the disk or the database.
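The "cache before it hits the database" pattern on this slide is the classic cache-aside lookup. Below is a minimal sketch using in-process dictionaries as hypothetical stand-ins for memcached and the database; the class and attribute names are illustrative, not from the paper.

```python
class CacheAsideStore:
    """Cache-aside: check the cache first; only on a miss, hit the
    backing store and populate the cache for the next reader."""

    def __init__(self, db):
        self.db = db        # stand-in for the database
        self.cache = {}     # stand-in for memcached (RAM)
        self.db_hits = 0    # how often the slow path was taken

    def get(self, key):
        if key in self.cache:      # fast path: served from RAM
            return self.cache[key]
        value = self.db[key]       # slow path: "database" query
        self.db_hits += 1
        self.cache[key] = value    # warm the cache
        return value

store = CacheAsideStore({"page:home": "<html>...</html>"})
store.get("page:home")   # first read: miss, goes to the database
store.get("page:home")   # second read: served from the cache
assert store.db_hits == 1
```

Every repeat read is absorbed by RAM, which is exactly how memcached "drops the database load to almost nothing" for read-heavy workloads.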
10
Why not use memcached? Memcached is held in RAM, which is a finite resource. Adding complexity to a system for complexity's sake is a waste: if the system can meet its requirements without it, leave it alone.
11
What are the limits of memcached? Keys can be no more than 250 characters. A stored value cannot exceed 1 MB (the largest typical slab size). There is generally no limit on the number of nodes running memcached, nor on the total amount of RAM used by memcached across all nodes, although a 32-bit machine does cap each instance at 4 GB.
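A client can enforce the two per-item limits above before issuing a set, so oversized items fail fast instead of being rejected by the server. A small sketch (the function name is illustrative):

```python
MAX_KEY_LEN = 250            # protocol limit on key length
MAX_VALUE_BYTES = 1 << 20    # 1 MB, the typical largest slab size

def validate_item(key: str, value: bytes) -> None:
    """Reject items that memcached would refuse to store."""
    if len(key) > MAX_KEY_LEN:
        raise ValueError("key exceeds 250 characters")
    if len(value) > MAX_VALUE_BYTES:
        raise ValueError("value exceeds 1 MB")

validate_item("user:42", b"x" * 100)   # fine: within both limits
```

Values larger than 1 MB are usually handled by splitting them into chunks under separate keys, at the cost of extra client logic.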
12
Memcached
13
Memcached Distributed Architecture
14
2 Hadoop-Memcached
15
MapReduce and disk access latency 1. Jobs are scheduled on the same node that houses the associated data 2. Data is replicated and placed in numerous ways to improve throughput and job completion times
16
RAMClouds: storage based solely on main memory
17
Contribution Propose a caching mechanism that strikes a balance between the two aforementioned approaches (disk-based MapReduce and RAMClouds). Combine data replication and placement algorithms with a proactive fetching and caching mechanism based on Memcached.
18
Design
20
A. Two-Level Greedy Caching Receiver-Only greedy caching policy: cache an object locally whenever a node needs it and it is unavailable in its local cache. Sender-Only greedy caching policy: cache an object whenever some other node requests it and the object is in the filesystem but not in the cache.
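The two greedy policies can be sketched side by side with a toy datanode model. This is a simulation under stated assumptions, not the paper's implementation; the `Node` class and method names are hypothetical.

```python
class Node:
    """Toy datanode with a local filesystem and a local cache tier."""

    def __init__(self, fs_blocks):
        self.fs = set(fs_blocks)   # blocks on this node's disk
        self.cache = set()         # blocks held in the local cache

    def read(self, block):
        """Receiver-Only policy: this node needs a block; if it is
        not in the local cache, cache it on the way in."""
        if block not in self.cache:
            self.cache.add(block)

    def serve(self, block):
        """Sender-Only policy: some other node requested a block;
        cache it if it is in the local filesystem but not cached."""
        if block in self.fs and block not in self.cache:
            self.cache.add(block)

n = Node(fs_blocks={"b1", "b2"})
n.read("b3")    # receiver-side miss: b3 enters the cache
n.serve("b1")   # remote request: b1 cached from the local filesystem
n.serve("b9")   # not in the filesystem: nothing happens
assert n.cache == {"b1", "b3"}
```

Running both methods together gives the two-level scheme: blocks get cached on the node that consumes them and on the node that serves them.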
21
Design B. Fetching a Cached Block Simultaneous-Requesting Memcached-First
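Of the two fetching strategies named above, Memcached-First is the simpler to sketch: query the cache tier first and fall back to HDFS only on a miss (Simultaneous-Requesting would instead issue both requests in parallel and take whichever answers first). The dictionaries below are hypothetical stand-ins for the memcached pool and HDFS.

```python
def fetch_block(block_id, memcached, hdfs):
    """Memcached-First fetch: try the cache tier, fall back to disk.
    Returns (data, source) so callers can see which path was taken."""
    value = memcached.get(block_id)
    if value is not None:
        return value, "cache"
    value = hdfs[block_id]          # authoritative copy on disk
    memcached[block_id] = value     # warm the cache for the next reader
    return value, "disk"

mc, fs = {}, {"blk_1": b"payload"}
assert fetch_block("blk_1", mc, fs) == (b"payload", "disk")
assert fetch_block("blk_1", mc, fs) == (b"payload", "cache")
```

The trade-off: Memcached-First adds one round trip of latency on every miss, while Simultaneous-Requesting avoids that at the cost of extra load on HDFS even when the cache would have answered.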
22
Design C. Replacement at the Memcached Servers Replace the LRU entry in the hash table and inform the node that has cached the block.
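The eviction-with-notification behavior can be sketched with an `OrderedDict`-backed LRU table whose eviction hook stands in for the message to the owning node. The class and callback are illustrative, not the paper's code.

```python
from collections import OrderedDict

class NotifyingLRU:
    """LRU table that, on eviction, invokes a callback so the node
    that cached the block can be informed it is gone."""

    def __init__(self, capacity, on_evict):
        self.capacity = capacity
        self.on_evict = on_evict
        self.table = OrderedDict()   # oldest entry first

    def put(self, key, value):
        if key in self.table:
            self.table.move_to_end(key)
        self.table[key] = value
        if len(self.table) > self.capacity:
            old_key, old_val = self.table.popitem(last=False)  # LRU entry
            self.on_evict(old_key, old_val)   # "inform the node"

    def get(self, key):
        self.table.move_to_end(key)  # touch: now most recently used
        return self.table[key]

evicted = []
lru = NotifyingLRU(2, lambda k, v: evicted.append(k))
lru.put("b1", 1)
lru.put("b2", 2)
lru.get("b1")        # b2 is now the least recently used entry
lru.put("b3", 3)     # over capacity: b2 is evicted, owner notified
assert evicted == ["b2"]
```

Notifying the caching node keeps its local view consistent, so it will not hand out stale pointers to a block memcached no longer holds.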
23
Design D. Global Cache Replacement Policy [Slide figure: example per-node counters, N1: 110 115 120 125; N2: 50 55 60 65]
24
Design Prefetching
25
Experiments and Results