Slide 1: NPS: A Non-interfering Web Prefetching System
Ravi Kokku, Praveen Yalagandula, Arun Venkataramani, Mike Dahlin
Laboratory for Advanced Systems Research
Department of Computer Sciences, University of Texas at Austin
Slide 2: Summary of the Talk
(September 19, 2015, Department of Computer Sciences, UT Austin)
- Prefetching should be done aggressively, but safely
  - Safe: non-interference with demand requests
- Contributions:
  - A self-tuning architecture for web prefetching
    - Aggressive when spare resources are abundant
    - Safe when resources are scarce
  - NPS: a prototype prefetching system
    - Immediately deployable
Slide 3: Outline
- Prefetch aggressively as well as safely
  - Motivation
  - Challenges/principles
- NPS system design
- Conclusion
Slide 4: What is Web Prefetching?
- Speculatively fetch data that will be accessed in the future
- Typical prefetch mechanism [PM96, MC98, CZ01]:
  - Client sends demand requests; server returns responses plus hint lists
  - Client sends prefetch requests; server returns prefetch responses
Slide 5: Why Web Prefetching?
- Benefits [GA93, GS95, PM96, KLM97, CB98, D99, FCL99, KD99, VYKSD01, ...]
  - Reduces response times seen by users
  - Improves service availability
- Encouraging trends
  - Numerous web applications being deployed: news, banking, shopping, e-mail, ...
  - Technology is improving rapidly: capacities and prices of disks and networks
- Takeaway: Prefetch aggressively
Slide 6: Why doesn't everyone prefetch?
- Prefetching consumes extra resources on servers, the network, and clients
- It can interfere with demand requests
- Two types of interference:
  - Self-interference: applications hurt themselves
  - Cross-interference: applications hurt others
- Interference arises at various components:
  - Servers: demand requests queued behind prefetch requests
  - Networks: demand packets queued or dropped
  - Clients: caches polluted by displacing more useful data
Slide 7: Example: Server Interference
- The common load vs. response-time curve
- Constant-rate prefetching reduces server capacity
[Figure: average demand response time (s) vs. demand connection rate (conns/sec), comparing demand-only, pfrate=1, and pfrate=5]
- Takeaway: Prefetch aggressively, BUT SAFELY
Slide 8: Outline
- Prefetch aggressively as well as safely
  - Motivation
  - Challenges/principles:
    - Self-tuning
    - Decoupling prediction from resource management
    - End-to-end resource management
- NPS system design
- Conclusion
Slide 9: Goal 1: Self-tuning System
- Proposed solutions use "magic numbers"
  - Prefetch thresholds [D99, PM96, VYKSD01, ...]
  - Rate limiting [MC98, CB98]
- Limitations of manual tuning
  - Difficult to determine "good" thresholds
  - Good thresholds depend on spare resources and vary over time
  - Sharp performance penalty when mistuned
- Principle 1: Self-tuning: prefetch according to spare resources
  - Benefit: simplifies application design
Slide 10: Goal 2: Separation of Concerns
- Prefetching has two components
  - Prediction: which objects are beneficial to prefetch?
  - Resource management: how many can we actually prefetch?
- Traditional techniques do not differentiate the two (e.g., "prefetch if prob(access) > 25%" or "prefetch only the top 10 URLs"); this is the wrong way, because we lose the flexibility to adapt
- Principle 2: Decouple prediction from resource management
  - Prediction: the application identifies all useful objects, in decreasing order of importance
  - Resource management: uses Principle 1; aggressive when resources are abundant, safe when there are none
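Principle 2 can be sketched in a few lines: the predictor only ranks candidates, and a separate resource manager decides how deep into the ranked list to go based on the current budget. The function names and candidate format here are illustrative, not taken from NPS.

```javascript
// Predictor: order every potentially useful object by decreasing
// estimated access probability. No thresholds, no magic numbers.
function rankCandidates(candidates) {
  return [...candidates].sort((a, b) => b.prob - a.prob);
}

// Resource manager: take a prefix of the ranked list sized by the
// self-tuned budget. Aggressive when the budget is large, and
// prefetches nothing when the budget drops to zero.
function selectPrefetches(ranked, budget) {
  return ranked.slice(0, budget).map((c) => c.url);
}

const ranked = rankCandidates([
  { url: "/a.html", prob: 0.9 },
  { url: "/b.html", prob: 0.2 },
  { url: "/c.html", prob: 0.6 },
]);
console.log(selectPrefetches(ranked, 2));
```

With this split, adapting to load means changing only the budget, never the predictor.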
Slide 11: Goal 3: Deployability
- Ideal resource management vs. deployability
  - Servers. Ideal: OS scheduling of CPU, memory, disk, ... Problem: complexity (N-tier systems, databases, ...)
  - Networks. Ideal: differentiated services / router prioritization. Problem: every router must support it
  - Clients. Ideal: OS scheduling, transparent informed prefetching. Problem: millions of already-deployed browsers
- Principle 3: End-to-end resource management
  - Server: external monitoring and control
  - Network: TCP Nice
  - Client: Javascript tricks
Slide 12: Outline
- Prefetch aggressively as well as safely
  - Motivation
  - Principles for a prefetching system: self-tuning; decoupling prediction from resource management; end-to-end resource management
- NPS prototype design
  - Prefetching mechanism
  - External monitoring
  - TCP Nice
- Evaluation
- Conclusion
Slide 13: Prefetch Mechanism
Components on the server machine: munger, demand server, prefetch server, and hint server (backed by the fileset).
1. The munger adds Javascript to HTML pages
2. The client fetches an HTML page
3. The Javascript on the page fetches a hint list
4. The Javascript on the page prefetches the hinted objects
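Steps 3 and 4 above can be sketched as a small client-side loop. This is a minimal illustration, not the actual injected code (NPS uses Javascript tricks described in the paper); the hint URL, reply shape, and injectable fetch function are all assumptions for the sketch.

```javascript
// Client-side prefetch sketch: ask the hint server for a list of
// URLs, then fetch each one in the given (importance) order.
// fetchFn is injected so the logic is testable outside a browser.
async function prefetchLoop(fetchFn, hintUrl) {
  const reply = await fetchFn(hintUrl);          // step 3: get hint list
  if (reply.status === "return later") return []; // monitor denied budget
  const fetched = [];
  for (const url of reply.hints) {               // step 4: prefetch objects
    await fetchFn(url);
    fetched.push(url);
  }
  return fetched;
}
```

In a real browser the per-object fetch would go to the separate prefetch port so demand traffic is never queued behind it.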
Slide 14: End-to-end Monitoring and Control
- Principle: low response times imply the server is not loaded
- The monitor periodically probes the server for response times (e.g., GET http://repObj.html)
- Spare resources at the server are estimated as a budget, adapted by AIMD
- The budget is distributed by controlling the number of clients allowed to prefetch
- Client loop: while(1) { getHint( ); prefetchHint( ); }
- Hint server: if (budgetLeft) send(hints); else send("return later");
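The AIMD budget estimation and the hint server's gate above can be sketched as follows. The additive step (+1), multiplicative factor (1/2), and response-time threshold are assumed constants for illustration; the slides do not give NPS's actual parameters.

```javascript
// AIMD budget update driven by the monitor's probe of demand
// response time: grow additively while the server looks unloaded,
// back off multiplicatively as soon as demand latency climbs.
function updateBudget(budget, probedResponseMs, thresholdMs) {
  if (probedResponseMs < thresholdMs) {
    return budget + 1;             // additive increase
  }
  return Math.floor(budget / 2);   // multiplicative decrease
}

// Hint server: release hints only while prefetch budget remains,
// otherwise tell the client to come back later.
function serveHint(budgetLeft, hints) {
  return budgetLeft > 0 ? { status: "ok", hints } : { status: "return later" };
}
```

Because the probe is an ordinary external GET, this needs no changes inside the (possibly complex, N-tier) demand server.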
Slide 15: Monitor Evaluation (1)
- End-to-end monitoring makes prefetching safe
[Figure: average demand response time (s) vs. demand connection rate (conns/sec), comparing the monitor, manual tuning with pfrate=1 and pfrate=5, and no prefetching]
Slide 16: Monitor Evaluation (2)
- Manual tuning is too damaging at high load
[Figure: demand and prefetch bandwidth (Mbps) vs. demand connection rate (conns/sec) for pfrate=1, compared against no prefetching]
Slide 17: Monitor Evaluation (2, contd.)
- Manual tuning is either too timid or too damaging
- End-to-end monitoring is both aggressive and safe
[Figure: demand and prefetch bandwidth (Mbps) vs. demand connection rate (conns/sec) for pfrate=1 and for the monitor, compared against no prefetching]
Slide 18: Network Resource Management
- Demand and prefetch traffic go on separate connections
- Why is this required?
  - With HTTP/1.1 persistent connections and TCP's in-order delivery, prefetch traffic on a shared connection delays demand traffic
- How to ensure separation? Prefetch on a separate server port
- How to use the prefetched objects? Javascript tricks (details in the paper)
Slide 19: Network Resource Management (contd.)
- Prefetch connections use TCP Nice
- TCP Nice
  - A mechanism for background transfers
  - End-to-end TCP congestion control: monitors RTTs and backs off when congestion appears
- Previous study [OSDI 2002]
  - Provably bounds self- and cross-interference
  - Utilizes significant spare network capacity
  - Server-side deployable
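TCP Nice's core idea, reacting to rising round-trip times before routers start dropping packets, can be sketched as a window update rule. The threshold fraction (0.2) is an assumed value for illustration; the real protocol also counts how many packets per window see high RTTs and allows the window to fall below one packet.

```javascript
// Sketch of a Nice-style congestion window update: if the measured
// RTT climbs past a threshold fraction of the way from the minimum
// toward the maximum observed RTT, treat that as an early congestion
// signal and back off sharply; otherwise grow like normal TCP.
function niceWindow(cwnd, rttMs, minRttMs, maxRttMs, frac = 0.2) {
  const threshold = minRttMs + frac * (maxRttMs - minRttMs);
  if (rttMs > threshold) {
    return cwnd / 2;   // early back-off: queues are building
  }
  return cwnd + 1;     // no queuing signal: additive increase
}
```

Because the signal is delay rather than loss, background flows yield to demand flows before demand packets are ever queued or dropped.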
Slide 20: End-to-end Evaluation
- Measure average response times for demand requests
- Compare with no-prefetching and hand-tuned configurations
- Experimental setup: a client (httperf replaying an IBM server trace) connects over the network (cable modem, Abilene) to a server machine running the demand server (Apache on port 80), the prefetch server (Apache on port 8085), and the hint server (PPM prediction over the fileset)
Slide 21: Prefetching with Abundant Resources
- Both hand-tuned prefetching and NPS give benefits
- Note: the hand-tuned configuration is tuned to its best setting
Slide 22: Tuning the No-Avoidance Case
- Hand-tuning takes effort
- NPS is self-tuning
Slide 23: Prefetching with Scarce Resources
- Hand-tuned prefetching increases demand response times by 2-8x
- NPS causes little damage to demand traffic
Slide 24: Conclusions
- Prefetch aggressively, but safely
- Contributions: a prefetching architecture that
  - Is self-tuning
  - Decouples prediction from resource management
  - Is deployable, with few modifications to existing infrastructure
- Benefits
  - Substantial improvements with abundant resources
  - No damage with scarce resources
- NPS prototype: http://www.cs.utexas.edu/~rkoku/RESEARCH/NPS/
Slide 25: Thanks
Slide 27: Client Resource Management
- Resources: CPU, memory, and disk caches
- Heuristics to control cache pollution
  - Limit the space prefetched objects may take
  - Give prefetched objects a short expiration time
- Mechanism to avoid CPU interference
  - Start prefetching only after all demand requests are done
- This handles self-interference, the more common case; avoiding cross-interference might require client modifications
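The two cache-pollution heuristics above can be sketched as an admission policy: prefetched entries share a fixed byte quota and expire after a short TTL, so they can never displace more than a bounded amount of demand-fetched data. The cache representation, quota, and TTL here are illustrative, not NPS's actual values.

```javascript
// Admit a prefetched object into the cache under two constraints:
// (1) expired prefetched entries are dropped first (short TTL), and
// (2) prefetched entries together never exceed quotaBytes, evicting
// the oldest prefetched entries (never demand entries) to make room.
function admitPrefetched(cache, obj, now, quotaBytes, ttlMs) {
  cache = cache.filter((e) => !e.prefetched || now - e.added < ttlMs);
  let used = cache
    .filter((e) => e.prefetched)
    .reduce((sum, e) => sum + e.size, 0);
  while (used + obj.size > quotaBytes) {
    const idx = cache.findIndex((e) => e.prefetched);
    if (idx < 0) return cache;        // object cannot fit: refuse it
    used -= cache[idx].size;
    cache.splice(idx, 1);             // evict oldest prefetched entry
  }
  cache.push({ ...obj, prefetched: true, added: now });
  return cache;
}
```

Demand-fetched entries are untouched by both rules, which is what bounds the pollution.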