Slide 1: Adaptive Overload Control for Busy Internet Servers
Matt Welsh and David Culler, USITS 2003
Presented by: Bhuvan Urgaonkar
Slide 2: Internet Services Today
- Massive concurrency demands
  - Yahoo: 1.2 billion+ pageviews/day
  - AOL web caches: 10 billion hits/day
- Load spikes are inevitable
  - Peak load is orders of magnitude greater than the average
  - Traffic on September 11, 2001 overloaded many news sites
  - Load spikes occur exactly when the service is most valuable; in this regime, overprovisioning is infeasible
- Increasingly dynamic
  - The days of the "static" web are over
  - The majority of services are based on dynamic content: e-commerce, stock trading, driving directions, etc.
Slide 3: Problem Statement
- Supporting massive concurrency is hard
  - Threads/processes don't scale very well
- Static resource containment is inflexible
  - How do we set a priori resource limits for widely varying loads?
  - Load management demands a feedback loop
- Replication alone does not solve the load management problem
  - Individual nodes may still face huge variations in demand
Slide 4: Proposal: The Staged Event-Driven Architecture
- SEDA: a new architecture for Internet services
  - A general-purpose framework for high concurrency and load conditioning
  - Decomposes applications into stages separated by queues
- Enables load conditioning
  - Event queues allow inspection of request streams
  - Prioritization or filtering can be performed during heavy load
  - Control can be applied for graceful degradation: perform load shedding or degrade service under overload
Slide 5: Staged Event-Driven Architecture
- Decompose the service into stages separated by queues (see the sketch below)
  - Each stage performs a subset of the request processing
  - Stages are internally event-driven and typically nonblocking
  - Queues introduce an execution boundary for isolation
- Each stage contains a thread pool to drive stage execution
  - A dynamic controller grows/shrinks the thread pools with demand
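To make the stage structure concrete, the following is a minimal Java sketch of one stage, assuming a bounded event queue drained by a fixed thread pool; the names (Stage, enqueue, runLoop) are illustrative and not the actual SEDA/Sandstorm API, and the dynamic thread-pool sizing controller is omitted.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// One SEDA-style stage: a bounded event queue (the execution boundary), a
// per-stage thread pool, and an event handler that forwards results downstream.
class Stage<I, O> {
    private final BlockingQueue<I> queue;
    private final ExecutorService workers;
    private final Function<I, O> handler;
    private final Stage<O, ?> next;   // downstream stage, or null for the last stage

    Stage(int queueCapacity, int threads, Function<I, O> handler, Stage<O, ?> next) {
        this.queue = new ArrayBlockingQueue<>(queueCapacity);
        this.workers = Executors.newFixedThreadPool(threads);
        this.handler = handler;
        this.next = next;
        for (int i = 0; i < threads; i++) {
            workers.submit(this::runLoop);
        }
    }

    // Admission point: offer() returns false when the queue is full, which is
    // the backpressure signal seen by the preceding stage.
    boolean enqueue(I event) {
        return queue.offer(event);
    }

    private void runLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                I event = queue.take();            // block until an event arrives
                O result = handler.apply(event);   // stage-specific, nonblocking work
                if (next != null && !next.enqueue(result)) {
                    // Downstream queue is full: here we simply drop; a real
                    // application could retry, degrade, or report an error.
                    System.err.println("downstream stage full, dropping event");
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The bounded queue is what turns "too much work" into an explicit, per-stage signal instead of unbounded thread or memory growth.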
Slide 6: Per-stage admission control
- Admission control is done at each stage (a pluggable policy hook is sketched below)
- Failure to enqueue a request results in backpressure on the preceding stages
- The application has the flexibility to respond as appropriate
- Less conservative than a single admission controller for the whole service
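One way to picture a per-stage policy hook (my own illustration; the interface name and the threshold policy are not from the paper): each stage consults its own admission controller before accepting an event, so different stages can shed different work.

```java
// Hypothetical per-stage admission hook: the stage calls accept() before
// enqueueing; each stage can install its own policy.
interface AdmissionController<E> {
    boolean accept(E event, int currentQueueLength);
}

// Example policy: admit everything while the queue is short, then shed
// low-priority requests first.
class ThresholdController implements AdmissionController<Request> {
    private final int threshold;

    ThresholdController(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public boolean accept(Request request, int currentQueueLength) {
        return currentQueueLength < threshold || request.highPriority;
    }
}

class Request {
    final boolean highPriority;
    Request(boolean highPriority) { this.highPriority = highPriority; }
}
```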
Slide 7: Response time controller
- The 90th-percentile response time over some interval is passed to the controller
- An AIMD (additive-increase/multiplicative-decrease) heuristic is used to determine the token-bucket rate (sketch below)
- The exact scheduling mechanisms are left unspecified
- Future work: automatic tuning of the parameters
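A compact sketch of what such a controller could look like in Java (the step size, the 0.9 decrease factor, and the method names are assumptions, not values from the paper): the observed 90th-percentile response time is compared with the target, and the token-bucket admission rate is adjusted additively upward or multiplicatively downward.

```java
// AIMD adjustment of a token-bucket admission rate driven by the observed
// 90th-percentile response time (illustrative parameters).
class ResponseTimeController {
    private final double targetMillis;      // administrator-supplied 90th-percentile target
    private final double additiveStep;      // rate increase per interval when under target
    private final double decreaseFactor;    // multiplicative cut when over target
    private double ratePerSecond;           // current token-bucket refill rate

    ResponseTimeController(double targetMillis, double initialRatePerSecond) {
        this.targetMillis = targetMillis;
        this.ratePerSecond = initialRatePerSecond;
        this.additiveStep = 2.0;             // assumed value
        this.decreaseFactor = 0.9;           // assumed value
    }

    // Called once per sampling interval with the 90th-percentile response time
    // of the requests that completed during that interval.
    void onSample(double observed90thMillis) {
        if (observed90thMillis > targetMillis) {
            ratePerSecond = Math.max(1.0, ratePerSecond * decreaseFactor);  // multiplicative decrease
        } else {
            ratePerSecond += additiveStep;                                  // additive increase
        }
    }

    double currentRatePerSecond() {
        return ratePerSecond;
    }
}
```

The token bucket then admits at most currentRatePerSecond() requests per second into the stage's queue; requests above that rate are rejected, which is what triggers the backpressure described on the previous slide.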
Slide 8: Overload management
- Class-based differentiation (a sketch of the second option follows)
  - Segregate request processing for each class into its own set of stages
  - Or keep a common set of stages but make the admission controller aware of the classes
- Service degradation
  - SEDA signals the occurrence of overload to applications
  - If the application wants, it may degrade its service
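For the second option (a common set of stages with a class-aware admission controller), a small illustrative sketch; the per-class thresholds and the idea of shedding the lowest class first are my own example, not numbers from the paper.

```java
// Class-aware admission: as the shared queue grows, lower service classes are
// rejected before higher ones (illustrative thresholds).
class ClassAwareAdmission {
    // Queue length at which each class stops being admitted; index = class id,
    // with class 0 the cheapest/lowest-priority class.
    private final int[] rejectThresholdByClass;

    ClassAwareAdmission(int[] rejectThresholdByClass) {
        this.rejectThresholdByClass = rejectThresholdByClass;
    }

    boolean admit(int serviceClass, int currentQueueLength) {
        return currentQueueLength < rejectThresholdByClass[serviceClass];
    }
}

// Example: casual visitors (class 0) are shed once the queue exceeds 50 entries,
// registered users (class 1) at 200, paying customers (class 2) only at 1000:
//   ClassAwareAdmission ac = new ClassAwareAdmission(new int[] {50, 200, 1000});
```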
Slide 9: Arashi: a SEDA-based email service
- A web-based email service: managing folders, deleting/refiling mail, search, etc.
- The client workload emulates several simultaneous users; user behavior is derived from traces of the UCB CS IMAP server
Slide 10: Controller operation
Slide 11: Overload control with increased user load
Slide 12: Increased user load (continued)
Slide 13: Overload control under a massive load spike
Slide 14: Per-stage admission control vs. a single admission controller
Slide 15: Advantages of SEDA
- Exposure of the request stream
  - Request-level performance is made available to the application
- Focused, application-specific admission control
  - Fine-grained admission control at each stage
  - The application can provide its own admission control policy
- Modularity and performance isolation
  - Inter-stage communication via event passing enables code modularity
Slide 16: Shortcomings
- Biggest shortcoming: the approach is heuristic-based
  - It may work for some applications and fail for others
- Not completely self-managed
  - Response time targets are supplied by the administrator
  - Controller parameters are set manually
- Limited to applications based on the SEDA approach
- An evaluation of the overheads is missing
- The exact scheduling mechanisms are missing
Slide 17: Some thoughts/directions
- Formal ways to reason about the goodness of resource management policies
  - Also, the distinction between transient and drastic/persistent overloads
- Policy issues: revenue maximization and predictable application performance
  - Designing service-level agreements (SLAs)
  - Mechanisms to implement them
- Application modeling and workload prediction
Slide 18: Overload control: a big picture
- Operating regimes (from the slide's diagram): underload, avoidable overload, unavoidable overload
- Detection of overloads
- Formal and rigorous ways of defining the goodness of "self-managing" techniques
- Unavoidable and avoidable overloads involve different actions (e.g., admission control versus reallocation); are they fundamentally different?
Slide 19: Knowing where you are!
- Distinguish avoidable overloads from unavoidable overloads
  - Requires accurate application models and workload predictors
- Challenges: multi-tiered applications, multiple resources, dynamically changing application behavior
- Simple models based on networks of queues? How good would they prove? (See the sketch below.)
- Diagram: a model takes the (predicted) workload and the resource allocations as inputs and predicts performance relative to the goal
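As a concrete (and deliberately oversimplified) example of the "networks of queues" idea, here is what the simplest single-queue model, M/M/1, would say; assuming Poisson arrivals and exponential service times, the mean response time is R = 1/(mu - lambda), and an allocation could be checked against a performance goal like this:

```java
// Illustrative use of the M/M/1 model to check whether a resource allocation
// can meet a response time goal for a predicted workload. Real services would
// need networks of queues and multi-resource models, as the slide notes.
class MM1Check {
    // Mean response time of an M/M/1 queue: R = 1 / (mu - lambda),
    // valid only when lambda < mu (otherwise the queue is unstable).
    static double meanResponseTime(double arrivalRate, double serviceRate) {
        if (arrivalRate >= serviceRate) {
            return Double.POSITIVE_INFINITY;
        }
        return 1.0 / (serviceRate - arrivalRate);
    }

    static boolean meetsGoal(double predictedArrivalRate, double allocatedServiceRate,
                             double responseTimeGoalSeconds) {
        return meanResponseTime(predictedArrivalRate, allocatedServiceRate)
                <= responseTimeGoalSeconds;
    }

    public static void main(String[] args) {
        // Example: 80 req/s predicted, an allocation that can serve 100 req/s,
        // and a goal of 100 ms mean response time.
        System.out.println(meetsGoal(80.0, 100.0, 0.1));  // prints "true" (R = 50 ms)
    }
}
```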
Slide 20: Workload prediction: a simple example
- A static application model
  - Find CPU and network usage distributions by offline profiling
  - Use the 99th percentiles as the CPU and network requirements
- When the application runs "for real"
  - We don't get to see what the tail would have been
  - So ... resort to some prediction techniques
  - E.g., a web server: record the number of requests that arrived (N) and the number serviced (M), then extrapolate to predict the CPU and network requirements of the full N requests (sketch below)
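A hypothetical rendering of that extrapolation in Java (the slide does not give the exact procedure, so the class and method names, and the choice of scaling observed usage by N/M, are assumptions):

```java
// Static-model workload prediction: per-request 99th-percentile CPU and
// network demands come from offline profiling; at run time, usage observed
// for the M serviced requests is scaled up to the N requests that arrived.
class StaticModelPrediction {
    private final double cpu99PerRequestSeconds;  // profiled 99th-percentile CPU per request
    private final double net99PerRequestBytes;    // profiled 99th-percentile network I/O per request

    StaticModelPrediction(double cpu99PerRequestSeconds, double net99PerRequestBytes) {
        this.cpu99PerRequestSeconds = cpu99PerRequestSeconds;
        this.net99PerRequestBytes = net99PerRequestBytes;
    }

    // Scale the CPU time actually measured for M serviced requests up to the
    // N requests that arrived; fall back to the offline profile if M == 0.
    double predictedCpuSeconds(long arrivedN, long servicedM, double measuredCpuSeconds) {
        if (servicedM == 0) {
            return arrivedN * cpu99PerRequestSeconds;
        }
        return measuredCpuSeconds * ((double) arrivedN / servicedM);
    }

    // Network demand estimated directly from the profiled per-request requirement.
    double predictedNetworkBytes(long arrivedN) {
        return arrivedN * net99PerRequestBytes;
    }
}
```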
Slide 21: Service-level agreements
- We may want a mapping from offered workload to guaranteed response time:

    Workload    Response time
    w1          r1
    w2          r2
    ...         ...
    wN          rN

- Is this possible to achieve? Maybe not.
- How about a mapping from delivered response time to revenue per request instead:

    Response time    Revenue/request
    r1               $$1
    ...              ...
    rN               $$N