Improving On-demand Data Access Efficiency with Cooperative Caching in MANETs
PhD Dissertation Defense, 11.21.05 @ CSE, ASU
Yu Du
Chair: Dr. Sandeep Gupta
Committee: Dr. Partha Dasgupta, Dr. Arunabha Sen, Dr. Guoliang Xue
Supported in part by NSF grants ANI-0123980, ANI-0196156, and ANI-0086020, and the Consortium for Embedded Systems.
2 Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 6. Conclusions and future work
3 1.1 Problems of data access in MANETs MANETs – Mobile Ad hoc Networks – Wireless medium – Multi-hop routes – Dynamic topologies – Resource constraints On-demand data access uses the client/server model. 1. Introduction
4 1.2. Reducing data access costs in MANETs The locality principle [Denning] – Computer programs tend to repeatedly reference a subset of their data/instructions. – Used in processor caches, storage hierarchies, Web browsers, and search engines. Zipf's law [Zipf] – P(i) ∝ 1/i^α (α close to unity): common interest in popular data. – 80-20 rule: 80% of data accesses go to 20% of the data. Cooperative caching – Multiple nodes share and cooperatively manage their cached contents. 1. Introduction
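To make the 80-20 intuition concrete, here is a small illustrative sketch (not from the dissertation): it draws requests from a Zipf-like popularity distribution with α = 1 and reports the fraction of accesses that land on the most popular 20% of items. The catalogue size and request count are arbitrary choices.

```python
import random

def zipf_weights(n_items, alpha=1.0):
    """Popularity weights with P(i) proportional to 1/i**alpha for ranks i = 1..n_items."""
    return [1.0 / (i ** alpha) for i in range(1, n_items + 1)]

def top20_share(n_items=1000, n_requests=100_000, alpha=1.0):
    """Fraction of sampled requests that hit the top 20% most popular items."""
    weights = zipf_weights(n_items, alpha)
    requests = random.choices(range(n_items), weights=weights, k=n_requests)
    cutoff = n_items // 5  # ranks 0..cutoff-1 are the most popular 20% of items
    return sum(1 for r in requests if r < cutoff) / n_requests

if __name__ == "__main__":
    # With alpha close to 1, roughly 75-80% of accesses fall on 20% of the items.
    print(f"Share of accesses to the top 20% of items: {top20_share():.2%}")
```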
5 1.3. Cooperative caching Cooperative caching – A caching node not only serves its own data requests but also requests from other nodes. – A caching node not only stores data for its own needs but also for others. – Shorter paths, less expensive links, fewer conflicts, lower risk of route breakage. – Saves time, energy, and bandwidth, and improves data availability. Why? – Data locality and commonality in users' interests. – Client/server communication vs. inter-cache communication. – Users around the same location tend to have similar interests: people gathered around a food court want menus; an exploration team wants environmental information. 1. Introduction (Figure: Ashley and Bob, with a path to the remote server.)
6 Roadmap 1. Introduction 2. Cooperative caching 2.1. Overview 2.2. Cache resolution 2.3. Cache management 2.4. Cache consistency control 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 6. Conclusions and future work
7 2.1 Overview Cooperative caching – Multiple nodes share and cooperatively manage their cached contents. – Cache resolution – Cache management – Cache consistency control Used by Web caches/proxy servers on the Internet – To alleviate server overloading and response delay. – These schemes did not consider the special features of MANETs. 2. Cooperative caching
8 2.2 Cache resolution How to find a cache storing the requested data?
– Hierarchical – e.g., Harvest [Chank96].
– Directory-based – a directory maps caching nodes to the data items they hold; e.g., Summary [Fan00].
– Hash-table-based – data items are hashed onto caching nodes; e.g., Squirrel [Lyer02].
2. Cooperative caching
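As a rough illustration of the hash-table-based style (in the spirit of, but not identical to, Squirrel's protocol), every node can hash a data ID to the same "home" node and ask it for the item or for a pointer to a cached copy. The node list and hash choice below are hypothetical.

```python
import hashlib

def home_node(data_id, nodes):
    """Hash-based cache resolution: map a data ID to the node responsible for it.

    Because every node computes the same mapping locally, no directory lookup or
    hierarchy traversal is needed to decide where to send the request.
    """
    digest = hashlib.sha1(data_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Hypothetical node identifiers; in practice these would be node addresses.
nodes = ["node-1", "node-2", "node-3"]
print(home_node("D1", nodes))  # every node resolves "D1" to the same home node
```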
9 2.3 Cache management What to cache? – Admission control. – Cache replacement algorithm: LRU; Extended LRU (Squirrel) – any access has the same impact, whether it comes from the local node or from other nodes. 2. Cooperative caching
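A minimal sketch of the "extended LRU" idea described above (class and method names are illustrative, not taken from Squirrel): the cache is kept in recency order, and a hit refreshes an item's recency regardless of whether the request came from the local node or a remote one.

```python
from collections import OrderedDict

class ExtendedLRUCache:
    """LRU cache in which local and remote hits count equally toward recency."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # data_id -> data, least recently used first

    def get(self, data_id, remote=False):
        if data_id not in self.items:
            return None
        # Extended LRU: the `remote` flag is accepted but deliberately ignored,
        # so any access has the same impact on replacement decisions.
        self.items.move_to_end(data_id)
        return self.items[data_id]

    def put(self, data_id, data):
        if data_id in self.items:
            self.items.move_to_end(data_id)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
        self.items[data_id] = data
```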
10 2.4 Cache consistency control How to maintain consistency between the server and the caches?
– Strong/weak consistency: whether consistency is always guaranteed.
– Pull/push-based: who (client or server) initiates the consistency verification.
– Weak consistency: TTL (pull-based); synchronous invalidation (push-based).
– Strong consistency: lease (pull-based); asynchronous invalidation (push-based).
TTL is used in this research.
– Each data item has a Time-To-Live field – its allowed caching time.
– TTL is widely adopted in real applications, e.g., HTTP.
– Lower cost than strong-consistency protocols.
2. Cooperative caching
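A minimal sketch of TTL-based (weak, pull-based) consistency, with illustrative field names: each cached copy records when it was fetched and its allowed caching time, and is treated as stale once that time has passed.

```python
import time
from dataclasses import dataclass

@dataclass
class CachedCopy:
    data: bytes
    fetched_at: float  # seconds since the epoch, recorded when the copy was obtained
    ttl: float         # Time-To-Live: allowed caching time, set by the data source

    def is_fresh(self, now=None):
        """Weak consistency: the copy may be served until its TTL expires."""
        now = time.time() if now is None else now
        return now - self.fetched_at <= self.ttl

copy = CachedCopy(data=b"...", fetched_at=time.time(), ttl=60.0)
print(copy.is_fresh())  # True for 60 seconds, after which the copy must be re-validated
```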
11 3. Related work
Scheme | Cache resolution | Cache management | Consistency control | Network model
Harvest [Chank96] | Hierarchical | No specification | TTL | WAN
Summary [Fan00] | Directory-based | LRU | TTL | WAN
Squirrel [Lyer02] | Hash-based | Extended LRU | TTL | LAN
Cao04 [Cao04] | CacheData / CachePath / HybridCache | LRU | TTL | MANET
12 Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 4.1. System architecture 4.2. Cache resolution 4.3. Cache management 5. Performance evaluation 6. Conclusions and future work
13 4.1 System architecture Each node runs a COOP instance. The running COOP instance – Receives data requests from the user's applications. – Resolves requests using the cocktail cache resolution scheme. – Decides what data to cache using the COOP cache management scheme. – Uses the underlying protocol stack. 4. Proposed approach – COOP
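The bullets above can be read as an interface; the skeleton below is purely illustrative of that structure (the names, signatures, and composition are mine, not COOP's actual implementation).

```python
class CoopInstance:
    """Illustrative per-node skeleton of the architecture sketched above."""

    def __init__(self, cache, resolver, network):
        self.cache = cache        # local cache, managed by the cache management scheme
        self.resolver = resolver  # cocktail cache resolution scheme
        self.network = network    # underlying protocol stack (routing, transport)

    def request(self, data_id):
        """Entry point for data requests from the user's applications."""
        data = self.cache.get(data_id)
        if data is None:
            data = self.resolver.resolve(data_id, self.network)
            if self.should_cache(data_id, data):
                self.cache.put(data_id, data)
        return data

    def should_cache(self, data_id, data):
        """Placeholder for the cache management (admission/replacement) decision."""
        return data is not None
```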
14 4.2. Cache Resolution 4.2.1. Hop-by-Hop 4.2.2. Zone-based 4.2.3. Profile-based 4.2.4. COOP cache resolution – a cocktail approach 4. Proposed approach – COOP
15 4.2.1 Hop-by-Hop cache resolution The forwarding nodes try to resolve a data request before relaying it to the next hop. Reduces the travel distance of requests/replies. Helps to avoid expensive/unreliable network channels. 4. Proposed approach – COOP, 4.2 Cache resolution
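A sketch of the hop-by-hop idea, with hypothetical stand-ins for the real protocol objects: a forwarding node checks its own cache before relaying the request one hop closer to the server.

```python
def handle_request(request, local_cache, forward):
    """Hop-by-hop resolution sketch: try to answer locally before relaying.

    `request` is a dict carrying the requested data ID, `local_cache` is this
    node's cache (dict-like), and `forward` relays the request to the next hop.
    """
    data = local_cache.get(request["data_id"])
    if data is not None:
        # Cache hit on the forwarding path: reply from here, shortening the round trip.
        return {"data_id": request["data_id"], "data": data, "served_by": "intermediate"}
    # Cache miss: keep relaying toward the server (or the next resolution step).
    return forward(request)
```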
16 4.2.2 Zone-based cache resolution Users around the same location tend to share common interests. Cooperation zone – the surrounding nodes within r-hop range. – r: the radius of the cooperation zone. To find an item within the cooperation zone – Reactive approach – flooding within the cooperation zone. – Proactive approach – record previously heard requests. 4. Proposed approach – COOP, 4.2 Cache resolution
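A sketch of the reactive, zone-limited flooding mentioned above: the lookup message carries a hop budget equal to the zone radius r and is rebroadcast only while budget remains. The message fields and callbacks here are hypothetical.

```python
def start_zone_search(data_id, radius, broadcast):
    """Reactive zone-based resolution: flood a lookup limited to `radius` hops."""
    broadcast({"type": "zone-search", "data_id": data_id, "hops_left": radius})

def on_zone_search(message, local_cache, broadcast, reply):
    """Handler run on every node that hears a zone-search message."""
    data = local_cache.get(message["data_id"])
    if data is not None:
        reply(message["data_id"], data)  # found inside the cooperation zone
        return
    if message["hops_left"] > 1:
        # Still within the zone: rebroadcast with one less hop of budget.
        broadcast(dict(message, hops_left=message["hops_left"] - 1))
```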
17 4.2.3 Profile-based cache resolution Records received requests to assist future cache resolution.
– RRT – Recent Request Table.
– An entry is deleted if the recorded requester fails to supply the corresponding data item.
– When the table is full, LRU decides which entry to replace.
Requester | Time | Requested data ID
192.168.0.11 | 15:26:59, 08/16/2005 | D1
192.168.0.15 | 15:25:59, 08/16/2005 | D2
192.168.0.18 | 15:20:59, 08/16/2005 | D3
4. Proposed approach – COOP, 4.2 Cache resolution
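A sketch of a Recent Request Table along the lines described above (data-structure and method names are illustrative): overheard requests are remembered with LRU replacement, and an entry is dropped if the recorded requester later fails to supply the item.

```python
from collections import OrderedDict

class RecentRequestTable:
    """Remembers which node recently requested which data item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # data_id -> (requester, time), oldest first

    def record(self, data_id, requester, time):
        if data_id in self.entries:
            self.entries.move_to_end(data_id)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # table full: LRU decides the replacement
        self.entries[data_id] = (requester, time)

    def candidate_holder(self, data_id):
        """A node that requested the item recently is likely to have it cached."""
        entry = self.entries.get(data_id)
        return entry[0] if entry else None

    def drop(self, data_id):
        """Delete the entry when the recorded requester fails to supply the item."""
        self.entries.pop(data_id, None)
```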
18 4.2.4 COOP cache resolution – a cocktail approach 4. Proposed approach – COOP, 4.2 Cache resolution
19 4.3. Cache Management 4.3.1. Primary and secondary data 4.3.2. Inter-category and intra-category rules 4. Proposed approach – COOP
20 4.3.1. Primary and secondary data Different cache misses may incur different costs. – Example: the cache miss cost for X is higher than the cache miss cost for Y. Primary data and secondary data: – Primary data – not available within the cooperation zone. – Secondary data – available within the cooperation zone. (Figure: data server; X can be obtained from a neighbor, Y has to be obtained from the server.) 4. Proposed approach – COOP, 4.3 Cache management
21 4.3.2. Inter-category and intra-category rules Inter-category rule – applied when a replacement decision is made between different categories. – Primary data has precedence over secondary data. Intra-category rule – applied when a replacement decision is made within the same category. – LRU. Example (figure): cache contents A1–A5 (primary) and B1–B6 (secondary) at times T0 through T4. 4. Proposed approach – COOP, 4.3 Cache management
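A minimal sketch combining 4.3.1 and 4.3.2 (not COOP's actual implementation): on eviction, secondary data is removed before primary data (inter-category rule), and within a category the least recently used item goes first (intra-category rule).

```python
from collections import OrderedDict

class CategoryAwareCache:
    """Eviction sketch: evict secondary data before primary data, LRU within each."""

    def __init__(self, capacity):
        self.capacity = capacity
        # One LRU-ordered map per category, least recently used entries first.
        self.primary = OrderedDict()    # data not available within the cooperation zone
        self.secondary = OrderedDict()  # data available within the cooperation zone

    def _evict_one(self):
        # Inter-category rule: primary data has precedence, so victims come from
        # the secondary category whenever it is non-empty.
        pool = self.secondary if self.secondary else self.primary
        pool.popitem(last=False)  # intra-category rule: LRU

    def put(self, data_id, data, is_primary):
        # Remove any existing copy so an item lives in exactly one category.
        self.primary.pop(data_id, None)
        self.secondary.pop(data_id, None)
        if len(self.primary) + len(self.secondary) >= self.capacity:
            self._evict_one()
        (self.primary if is_primary else self.secondary)[data_id] = data

    def get(self, data_id):
        for pool in (self.primary, self.secondary):
            if data_id in pool:
                pool.move_to_end(data_id)  # refresh recency on a hit
                return pool[data_id]
        return None
```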
22 Roadmap 1. Introduction 2. Cooperative caching 3. Related work 4. Proposed approach – COOP 5. Performance evaluation 5.1. The impact of different zone radius 5.2. The impact of data access pattern 5.3. The impact of cache size 5.4. Data availability 5.5. Time cost: average travel distance 5.6. Cache miss ratio 5.7. Energy cost: message overhead 6. Conclusions and future work
23 5.1 The impact of different zone radius
(1) Average probability of finding a requested item d within the zone.
(2) Average time cost – assuming time cost is proportional to the number of hops covered.
(3) Average energy cost – assuming energy cost is proportional to the number of messages.
Notation: P_d – average probability that a node caches d; ρ – average node density; L – distance (in hops) between the requesting node and the server; r – cooperation zone radius.
5. Performance evaluation
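The slide's equations (1)–(3) did not survive the transcript. Purely as a hypothetical sketch of what (1) could look like, assuming nodes are placed uniformly with density ρ and each caches d independently with probability P_d (the dissertation's exact expression may differ):

```latex
% Treating the r-hop cooperation zone as a disc covering roughly \rho \pi r^2 nodes,
% each caching d independently with probability P_d:
P_{\mathrm{hit}}(r) \;\approx\; 1 - \left(1 - P_d\right)^{\rho \pi r^2}
```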
24 5.1 The impact of different zone radius If an item is not found within a cooperation zone of a certain size, it is unlikely to be found within a larger zone – this is the saturation point. 5. Performance evaluation
25 5.2 The impact of access pattern As the Zipf parameter α increases:
– Cache miss ratio decreases (CT-3, CT-2, CT-1, HBH, SC).
– Average travel distance decreases (CT-3, CT-2, CT-1, HBH, SC).
– Average number of messages decreases (HBH; CT-1, SC; CT-2, CT-3).
5. Performance evaluation
26 5.3 The impact of cache size As the cache size increases:
– Cache miss ratio decreases (CT-3, CT-2, CT-1, HBH, SC).
– Average travel distance decreases (CT-3; CT-2, CT-1, HBH, SC).
– Average number of messages decreases (HBH; CT-1, SC; CT-2, CT-3).
5. Performance evaluation
27 5.4 Data availability Varied factors: node number, pause time, node velocity. Data availability compared for CT-2, CT-1, HBH, and SC. 5. Performance evaluation
28 5.5 Time cost: average travel distance Varied factors: node number, pause time, node velocity. Average travel distance compared for CT-2, CT-1, HBH, and SC. 5. Performance evaluation
29 5.6 Cache miss ratio Varied factors: node number, pause time, node velocity. Cache miss ratio compared for CT-2, CT-1, HBH, and SC. 5. Performance evaluation
30 5.7 Energy cost: average #messages Varied factors: node number, pause time, node velocity. Average number of messages compared (CT-1; HBH, SC, CT-2). 5. Performance evaluation
31 6. Conclusions and future work Cooperative caching is supported by data locality and the commonality in users' interests. Proposed approach – COOP – Higher data availability – Lower time cost – Smaller cache miss ratio – The tradeoff is message overhead – The tradeoff depends on the cooperation zone radius. Future work – Adapt the cooperation zone radius to the user's requirements. – Explore different cooperation structures. – Enforce fairness in cooperative caching.
32 References
[Cao04] L. Yin and G. Cao, "Supporting cooperative caching in ad hoc networks", INFOCOM, 2004.
[Chank96] A. Chankhunthod et al., "A hierarchical Internet object cache", USENIX Annual Technical Conference, 1996.
[Denning] P. Denning, "The locality principle", Communications of the ACM, July 2005.
[Fan00] L. Fan et al., "Summary cache: A scalable wide-area Web cache sharing protocol", SIGCOMM, 1998.
[Lyer02] S. Iyer et al., "Squirrel: A decentralized peer-to-peer Web cache", PODC, 2002.
[Zipf] G. Zipf, "Human Behavior and the Principle of Least Effort", Addison-Wesley, 1949.
33 Q & A Thank You!