SEDA – Staged Event-Driven Architecture
SEDA: An Architecture for Well-Conditioned, Scalable Internet Services. Matt Welsh, David Culler & Eric Brewer. Presented by Rahul Agarwal. Note: underlined keywords will be revisited later.
Overview: Motivation; Key Points; Thread vs. Event Concurrency; SEDA; Experimental Evaluation; My Evaluation; Questions to Consider.
Authors
Matt Welsh (main author): the paper grew out of his PhD thesis at UC Berkeley. He is now an Assistant Professor in the CS Department at Harvard, currently working on wireless sensor networks, with research interests in operating systems, networks, and large-scale distributed systems; he has even written a book on Linux. David Culler & Eric Brewer were his advisors.
Motivation
High demand for web content, e.g. millions of concurrent users at Yahoo, Google, and news sites. Load spikes, the "Slashdot effect": traffic surges around significant events (9/11, the Mars lander) with far fewer users before or after; over-provisioning is not financially feasible, and the events arrive unannounced. Web services are also getting more complex, requiring maximum server performance: their logic changes rapidly, and they are hosted on general-purpose machines rather than carefully engineered hardware.
Motivation – Well-conditioned Service
A well-conditioned service behaves like a simple pipeline: as offered load increases, throughput increases proportionally, and when the service is saturated, throughput does not degrade (graceful degradation). The goal is to provide a well-conditioned service.
Pipelining: a pipeline has stages, and speedup comes from having more stages of smaller size. Bottlenecks: the dryer is too slow. Dependencies: you can't fold until the laundry is dried. If the dryer takes 60 min and the washer 30 min, queues will form in front of the dryer. In SEDA, resources can be reallocated: a dryer can change into a washer! How can we improve throughput in a pipeline? What are the potential problems?
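The washer/dryer arithmetic above can be checked with a quick calculation (an illustrative sketch, not from the talk; the function name is invented): in steady state, the slowest stage bounds the throughput of the whole pipeline.

```python
def pipeline_throughput(stage_minutes):
    """Items per hour a pipeline can sustain: limited by its slowest stage."""
    return 60.0 / max(stage_minutes)

# Washer 30 min, dryer 60 min: the dryer caps throughput at 1 load/hour,
# and a queue of wet laundry builds up in front of it.
print(pipeline_throughput([30, 60]))  # 1.0

# Splitting work into more, smaller stages raises the ceiling: four
# 15-minute stages sustain 4 loads/hour.
print(pipeline_throughput([15, 15, 15, 15]))  # 4.0
```

This is why SEDA's ability to reallocate resources between stages matters: shrinking the bottleneck stage is the only way to raise throughput.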
Key Points: the application is structured as a network of event-driven stages connected by queues. Dynamic resource controllers provide automatic load tuning: thread pool sizing, event batching, and adaptive load shedding. Together these support massive concurrency and prevent resources from being overcommitted. This is the overview of SEDA; each point is discussed in detail.
Thread-Based vs. Event-Based Concurrency
We are talking about internet services in this context. Thread-per-request: the OS switches between threads and overlaps computation with I/O; synchronization is required; performance degrades under load from cache misses, scheduling overhead, and lock contention when there are too many threads (e.g. RPC, RMI, DCOM). One remedy is giving applications more control over the OS, letting the OS see more (e.g. SPIN, Exokernel, Nemesis); another is reusing threads via bounded thread pools (e.g. Apache, IIS, Netscape — everyone, including BEA WebLogic and IBM WebSphere), which introduces unfairness. Event-based: one thread per CPU acts as a controller, processing events generated by applications and the kernel. Each task is implemented as a finite state machine (FSM) with transitions triggered by events — pipelining! (e.g. Flash, Zeus, JAWS) — but this model is harder to modularize and engineer. In SEDA, the design is broken into stages, and each stage has its own thread.
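The event-based model described above — each task implemented as a finite state machine driven by a single scheduler thread — can be sketched roughly as follows (a simplified illustration, not the paper's code; the states, events, and class names are invented for the example):

```python
from collections import deque

class RequestFSM:
    """One in-flight request, represented as a finite state machine."""
    def __init__(self, rid):
        self.rid, self.state = rid, "READ"

    def on_event(self, event):
        # Transitions are triggered by events rather than by blocking calls.
        transitions = {("READ", "bytes"): "PARSE",
                       ("PARSE", "parsed"): "RESPOND",
                       ("RESPOND", "sent"): "DONE"}
        self.state = transitions.get((self.state, event), self.state)

# One scheduler thread drives every request by dispatching queued events;
# requests interleave without one OS thread per request.
events = deque([(1, "bytes"), (2, "bytes"), (1, "parsed"),
                (1, "sent"), (2, "parsed"), (2, "sent")])
fsms = {1: RequestFSM(1), 2: RequestFSM(2)}
while events:
    rid, ev = events.popleft()
    fsms[rid].on_event(ev)
print(fsms[1].state, fsms[2].state)  # DONE DONE
```

The interleaving is the point: concurrency comes from multiplexing many small FSM steps on one thread, avoiding per-thread stacks and context switches, at the cost of the harder-to-engineer control flow the slide mentions.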
Thread-Based vs. Event-Based Concurrency (Contd.)
FSM = finite state machine. Threads suffer context-switch overhead, contended locks, throughput meltdown, and rising response times under load. Compare one thread per request vs. one active stage per request. These graphs are from the authors' initial investigation: note the change in throughput, and that latency rises in both models but at a much larger task count for the event-based one. Note the different scales on the axes.
Thread-Based vs. Event-Based Concurrency (Contd.)
SEDA FSM for a simple HTTP server request
The Haboob server (some edges excluded). Queues introduce explicit control boundaries. Stages are non-blocking internally but may block between stages. The thread pool is not exposed to applications; a controller manages it, adjusting the pool size depending on the length of the queue. SEDA assumes message passing does not require specialized support. If the next stage is slow, the rate at which events are emitted is adjusted (batching control). The SEDA prototype implementation is known as Sandstorm.
SEDA - Stage Each stage has its own thread pool
The controller adjusts resources: it may set an "admission control policy" (threshold, rate control, load shedding), adjust the thread pool size, and adjust the number of events processed per batch.
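A stage of the shape this slide describes — a bounded event queue, a handler, and a controller that grows the thread pool when the queue backs up — might look roughly like this in outline (a simplified sketch only; Sandstorm itself is written in Java, and the class name, thresholds, and defaults here are invented):

```python
import queue
import threading

class Stage:
    """One SEDA-style stage: event queue + handler + managed thread pool."""
    def __init__(self, handler, max_queue=100, threshold=10):
        self.events = queue.Queue(maxsize=max_queue)  # explicit control boundary
        self.handler = handler
        self.threshold = threshold
        self.threads = []

    def enqueue(self, event):
        """Admission control at the queue: reject the event when the queue is full."""
        try:
            self.events.put_nowait(event)
            return True
        except queue.Full:
            return False

    def controller_tick(self):
        """Thread-pool controller: add a worker when the backlog exceeds the threshold."""
        if self.events.qsize() > self.threshold:
            t = threading.Thread(target=self._worker, daemon=True)
            self.threads.append(t)
            t.start()

    def _worker(self):
        # Each worker loops, dequeuing events and running the stage's handler.
        while True:
            self.handler(self.events.get())
```

The key property is that the application only sees `enqueue`; whether the event is handled by one thread or ten, or rejected outright, is the controller's decision, not the caller's.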
Overload Management Resource Containment Admission Control
Ways to control overload. Resource containment: a static method, usually set by the administrator; Apache, for example, sets a fixed thread limit. It is unfair (some clients are served while others starve) and gives the client no response indicating what is going on. Admission control: parameter-based, either static or dynamically adjusted, or a control-theoretic approach. Service degradation: deliver a lower-fidelity service. The next slide covers the approach we are interested in.
SEDA Admission Control Policy
Uses the 90th percentile of response time as the performance metric; the goal is to keep the 90th-percentile response time within a set target. Adaptive admission control uses user classes (based on IPs or HTTP header info) to prioritize certain users over others, and a token-bucket traffic shaper; drop-tail, FIFO, RED, or other algorithms are also possible. Chart: for the Arashi service, a load spike of 1000 users arrives at t=70 and leaves at t=150; note how the admission rate adjusts at t=75 and then again around t=140.
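A token-bucket traffic shaper of the kind mentioned above can be sketched as follows (a generic textbook implementation, not SEDA's actual code; the rate and burst values are illustrative):

```python
class TokenBucket:
    """Admit a request only if a token is available; tokens refill at a fixed rate."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst  # tokens/sec and bucket depth
        self.tokens, self.last = burst, 0.0

    def admit(self, now):
        # Refill proportionally to elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request rejected (could instead queue or degrade)

tb = TokenBucket(rate=2.0, burst=2)  # sustain 2 requests/sec, allow bursts of 2
print([tb.admit(t) for t in (0.0, 0.0, 0.0, 1.0)])  # [True, True, False, True]
```

The controller can then tune `rate` per user class at runtime, which is how prioritizing some clients over others falls out of the same mechanism.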
Event-driven programming
Benefits: applications map cleanly into modules; each stage is self-contained; there is typically little or no data sharing. Challenges: determining the stage boundaries; stages can block; managing continuations between stages; tracking events.
Software Contribution
The software consists of the following, all implemented in Java (NBIO uses JNI), last updated July 2002. Sandstorm: the SEDA framework itself — implementing and connecting stages, queue management, etc. NBIO: a non-blocking I/O implementation; this wasn't supported in Java at the time but is now a standard feature. Haboob: a web server implementation, faster than Apache. aTLS: a library for SSL and TLS support. There is also a service called Arashi.
Experimental Evaluation
Throughput vs. number of clients for the Haboob web server, under a static and dynamic file load from the SPECweb99 benchmark (an industry-standard benchmark), with 1 to 1024 clients, a total file set of 3.31 GB, and a 200 MB memory cache. Result: roughly 10% improvement in throughput.
Throughput and Fairness vs. #Clients
Evaluation (Contd.) Throughput and fairness vs. number of clients. Jain's fairness index measures the equality of service to all clients, i.e. whether the server treats all clients equally. Suppose a server can support 100 requests from 10 clients: it is totally fair if it processes 10 requests for each of them, and unfair if, say, it processes 20 requests each for only 5 of them. (Shown on the board.)
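The two scenarios on this slide can be checked directly against Jain's formula, f = (Σx)² / (n · Σx²): f = 1.0 means all n clients are served equally, and f approaches 1/n as a few clients dominate.

```python
def jain_index(allocations):
    """Jain's fairness index over per-client allocations (in (0, 1])."""
    n = len(allocations)
    return sum(allocations) ** 2 / (n * sum(x * x for x in allocations))

fair = [10] * 10              # 100 requests: 10 each for all 10 clients
unfair = [20] * 5 + [0] * 5   # 100 requests: 20 each for only 5 of 10 clients
print(jain_index(fair), jain_index(unfair))  # 1.0 0.5
```

So the slide's "unfair" case scores 0.5: half the clients received everything, half received nothing.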
Cumulative response time distribution for 1024 clients
Cumulative response time distribution for Haboob, Apache, and Flash with 1024 clients. While Apache and Flash exhibit a high frequency of low response times, there is a heavy tail, with the maximum response time corresponding to several minutes. This is due to exponential backoff in the TCP SYN retransmit timer: Apache accepts only 150 connections, and Flash accepts only 506, despite 1024 clients requesting service. Note the log scale on the horizontal axis.
Evaluation (Contd.) Gnutella packet router
A non-traditional internet service: routing peer-to-peer Gnutella packets. Ping messages discover other nodes; Query messages search for the files being served. This demonstrates SEDA on a non-traditional service and shows the dynamic thread pools in action.
Summary. Main ideas: the notion of stages, explicit event queues, and dynamic resource controllers. Advantages: improved performance, well-conditioned services, and scalability.
My Evaluation: SEDA delivers increased performance at higher loads, which was the motivation. The increase in throughput is marginal, but it is significantly fairer at higher loads, and throughput does not degrade when resources are exhausted. However, the response times of the other servers are better at lower loads.
My Evaluation In context of duality of OS structures (Lauer & Needham)
SEDA is message-oriented. Lauer & Needham show that message-oriented and procedure-oriented systems can be directly mapped onto each other; but independent of the application under consideration, can performance really be similar? We will see how Capriccio approaches this — and there is no simple mapping!
Questions to consider: If SEDA is so good, why does the whole world (Apache, IIS, BEA, IBM, Netscape…) still use thread-based servers? In real-world scenarios, how often do load spikes occur — should the goal instead be to improve average-case performance? Is throughput or fairness the better metric? Is being faster "despite" being written in Java a bias, or a poor choice of language?
References
Welsh, M., Culler, D., & Brewer, E. (2001). SEDA: An architecture for well-conditioned, scalable internet services. Proceedings of the 18th SOSP, Banff, Canada, October 2001.
The SEDA Project.
Welsh, M. (2002). An architecture for well-conditioned, scalable internet services. PhD thesis, University of California, Berkeley.
Welsh, M. (2001). SEDA: An architecture for well-conditioned, scalable internet services. Presentation at the 18th SOSP, Lake Louise, Canada, October 24, 2001.
Welsh, M., & Culler, D. Adaptive overload control for busy internet servers. Unpublished.
Lauer, H. C., & Needham, R. M. (1978). On the duality of operating system structures. Proceedings of the 2nd International Symposium on Operating Systems, IRIA, October 1978; reprinted in Operating Systems Review, 13(2), April 1979, pp. 3-19.
von Behren, R., et al. (2003). Capriccio: Scalable threads for internet services. Proceedings of SOSP, New York, October 2003.