© 2004 - Lindsay Bradford

Scaling Dynamic Web Content Provision Using Elapsed-Time-Based Content Degradation
Lindsay Bradford, Stephen Milliner and Marlon Dumas
Contact: l.bradford@qut.edu.au
Centre for Information Technology Innovation, Queensland University of Technology
WISE 2004
Overview
– Motivation
– Background
– Our Content Degradation Prototype
– Experiments and Results
– Ongoing and Future Work
Motivation: Scalability Matters
– Users expect “service on demand” from the Web (Bhatti et al.).
– Dynamic web content is on the increase (Barford et al.).
– Dynamic content is much harder to scale than static content (Stading et al.).
– Consider Web Services (SOAP, WSDL, etc.): they make it easier to build clients with much smaller “think times”, placing greater strain on scalability.
Scaling Dynamic Web Content
Our focus is dynamic web content scalability at a single server:
– Desire: minimise TCO (Total Cost of Ownership).
– Single-server scalability = bottleneck reduction.
– Resource bottleneck for static web content (HTML files, etc.) = bandwidth.
– Resource bottleneck for dynamic web content (JSP, ASP, PHP, etc.) = CPU (Stading, Padmanabhan, Challenger).
– User-perceived QoS ≈ elapsed time, not CPU.
Background: Dynamic Content Degradation
– Deliver a degraded version of content under load.
– Aim: deliver an “acceptable” version to the end user that is cheaper to deliver than the original.
– Prior work: static web images (Abdelzaher and Bhatti); multimedia (Chandra et al.); degrading content via a distributed, tiered network (Chen and Iyengar).
As opposed to:
– Dynamic web content caching: has limited applicability.
– Resource management: can deny access to an unlucky majority.
Example: (figure)
Guiding Heuristics
– Pick content that will respond within a humanly acceptable elapsed time (< 1 second, based on HCI research: Ramsay et al., Bhatti et al., Bouch et al.).
– Prefer more costly content to less costly where possible.
– Balance content generation time against the target response time.
– Deliberately limit scope to the “application programmer” perspective: no modification of supporting technologies (app servers, etc.). What could developers do right now? What limits exist?
Our Content Degradation Prototype
– A number of approaches to generating the same web content (same URI).
– An Approach Selector picks which approach is most appropriate.
– Two parameters, lower-time-limit and upper-time-limit, act as elapsed-time thresholds: cross a threshold, choose a different approach.
– The slowest content-generating approach is called the baseline.
Our Prototype (2)
– Maintains a memory of the last n elapsed times (20 in our experiments) for response generation, per approach.
– The longer the memory, the more pessimistic the selector becomes about the future.
– An approach’s “current worst elapsed time” crossing one of the time-limit bounds triggers an approach change.
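The selection logic described on the last two slides can be sketched in plain Java. This is a minimal reconstruction for illustration only: class and method names are invented here, and the prototype’s actual selector sits inside a servlet container (next slides).

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of an elapsed-time-based approach selector.
// Approaches are indexed cheapest (0) to most costly (baseline).
final class ApproachSelector {
    private final long lowerLimitMs;         // lower-time-limit threshold
    private final long upperLimitMs;         // upper-time-limit threshold
    private final int memorySize;            // last n elapsed times kept (20 in the experiments)
    private final List<Deque<Long>> history; // per-approach elapsed-time memory
    private int current;                     // index of the approach currently in use

    ApproachSelector(int approaches, long lowerLimitMs, long upperLimitMs, int memorySize) {
        this.lowerLimitMs = lowerLimitMs;
        this.upperLimitMs = upperLimitMs;
        this.memorySize = memorySize;
        this.history = new ArrayList<>();
        for (int i = 0; i < approaches; i++) history.add(new ArrayDeque<>());
        this.current = approaches - 1; // start with the most costly (baseline) approach
    }

    int currentApproach() { return current; }

    // Record an observed elapsed time for the current approach, then check
    // whether its current worst elapsed time crosses a time-limit bound.
    void record(long elapsedMs) {
        Deque<Long> mem = history.get(current);
        mem.addLast(elapsedMs);
        if (mem.size() > memorySize) mem.removeFirst();
        long worst = mem.stream().mapToLong(Long::longValue).max().orElse(0L);
        if (worst > upperLimitMs && current > 0) {
            current--; // degrade: switch to a cheaper approach
        } else if (worst < lowerLimitMs && current < history.size() - 1) {
            current++; // upgrade: prefer more costly content where possible
        }
    }
}
```

Note how the memory makes the selector pessimistic: a single slow response lingers as the “current worst” for up to n further requests before it ages out.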
Our Prototype (3): Approach Selector Design (diagram)
Approach Selector Implementation
– Unmodified Apache Tomcat 4.1.18.
– Approach Selector implemented as a ServletFilter.
– Approaches implemented as servlets:
– 4 instances of a floating-point division servlet, configured to 100, 500, 1000 and 3000 loops.
– 4 instances of a memory-intensive lookup servlet with the same loop counts.
– The 3000-loop servlets are the baseline approaches.
The Test Traffic Patterns
– A response is adequate if there is <= 1 second elapsed time between the request being sent and the response being received at the client.
– Steady: server under constant load.
– Periodic: server alternates between 1 minute of load and 1 minute of no requests.
Periodic Pattern Latency (graph)
Periodic Pattern Throughput (graph)
Steady Pattern Latency (graph)
Steady Pattern Throughput (graph)
Periodic Limit Variance: Latency (graph)
Periodic Limit Variance: Throughput (graph)
Steady Limit Variance: Latency (graph)
Steady Limit Variance: Throughput (graph)
Results using the Approach Selector
– The benefit of the Approach Selector outweighs its overhead.
– Significant scalability gains recorded against our one-second (user-expected) response-time target.
– Approach Selector parameters: lower-time-limit matters; the algorithm is far less sensitive to upper-time-limit variations.
– The Approach Selector must be configured pessimistically to partially compensate for the elapsed time it misses.
Future Work: Approach Selector
– New I/O (database simulation) servlet.
– Altering the approach selection heuristic:
– Automatic lower-time-limit variance.
– Automatic variance of the number of remembered response times.
– Guidelines for automated service adaptation to request traffic.
Future Work: Architecture
We want to answer: “How can we minimise the overhead of the application development process with respect to content degradation?”
– Avoid JIC (just-in-case) degradation; explore JIT (just-in-time) degradation.
– Declarative approach: mark up code that degrades, with alternatives.
– Manual, fine-grained behaviour alteration at run-time (exploring now: an adaptable architecture based on generative communications).
– The servlet engine architecture (and specification) is too limiting. We desire a more adaptive architecture:
– Ability to dynamically alter the supporting architecture and its configuration (thread limits, etc.).
– Ability to manually modify behaviour as load changes.
– Individual (running) object replacement; interacting objects that know nothing of each other.
Finish. Questions? Comments?