Fast Servers
Robert Grimm, New York University
Or: Religious Wars, Part I, Events vs. Threads

Overview
- Challenge
  - Make the server go fast
- Approach
  - Cache content in memory
  - Overlap I/O and processing
- Some issues
  - Performance characteristics
  - Costs/benefits of optimizations & features
  - Programmability, portability, evolution

Server Architectures
- Multi-Process (MP) (see the sketch below)
  - One request per process
  - Easily overlaps I/O and processing
  - No synchronization necessary
- Multi-Threaded (MT)
  - One request per thread
  - Kernel/user threads?
  - Enables optimizations based on shared state
  - May introduce synchronization overhead
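A minimal sketch of the MP model, assuming a hypothetical handle_request() that parses the request and sends the reply; this is an illustration of the fork-per-connection structure, not Flash's or Apache's code, and error handling is elided.

```c
#include <sys/socket.h>
#include <unistd.h>

static void handle_request(int conn) {
    /* hypothetical: parse the request, read the file, send the reply */
    (void)conn;
}

static void mp_server(int listen_fd) {
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        if (fork() == 0) {            /* child: one request per process */
            close(listen_fd);
            handle_request(conn);     /* free to block on disk or network */
            close(conn);
            _exit(0);
        }
        close(conn);                  /* parent keeps accepting */
    }
}
```

Because each request has its own process, blocking anywhere is harmless, but nothing (caches, statistics) is shared without IPC.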

Server Architectures (cont.)
- Single Process Event Driven (SPED) (see the sketch below)
  - Request processing broken into separate steps
  - Step processing initiated by application scheduler
    - In response to completed I/O
  - OS needs to provide support
    - Asynchronous read(), write(), select() for sockets
    - But typically not for disk I/O
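A minimal sketch of a SPED-style event loop built on select(): one process watches all descriptors and advances whichever request becomes ready by one non-blocking step. The request_step() handler is hypothetical; a real server would also mark sockets non-blocking and track per-connection state.

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void request_step(int fd) {
    /* hypothetical: run the next non-blocking step of this request
       (read more of the request, send the next chunk of the reply, ...) */
    (void)fd;
}

static void sped_loop(int listen_fd) {
    fd_set watched;
    int maxfd = listen_fd;
    FD_ZERO(&watched);
    FD_SET(listen_fd, &watched);

    for (;;) {
        fd_set ready = watched;               /* select() overwrites its argument */
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
            continue;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == listen_fd) {            /* new connection: start watching it */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &watched);
                    if (conn > maxfd) maxfd = conn;
                }
            } else {
                request_step(fd);             /* must never block: a blocking
                                                 disk read here stalls everyone */
            }
        }
    }
}
```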

Server Architectures (cont.)
- Asymmetric Multi-Process Event Driven (AMPED)
  - Like SPED, but helpers handle disk I/O
  - Helpers invoked through pipes (IPC channel)
  - Helpers rely on mmap(), mincore()
  - Why?

Server Architectures (cont.)
- Staged Event Driven (SEDA) (see the sketch below)
  - Targeted at higher-level runtimes (e.g., Java VM)
    - No explicit control over memory (e.g., GC)
  - Each stage is event driven, but uses its own threads to process events
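A minimal sketch of the stage idea: an event queue drained by the stage's own small thread pool, with the handler free to enqueue follow-up events onto the next stage. Real SEDA is a Java framework with dynamic resource controllers; the names and layout here are made up, written in C for consistency with the other sketches.

```c
#include <pthread.h>
#include <stdlib.h>

struct event { struct event *next; void *data; };

struct stage {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    struct event   *head, *tail;
    void          (*handler)(void *data);   /* this stage's event handler */
};

static void stage_enqueue(struct stage *s, struct event *e) {
    pthread_mutex_lock(&s->lock);
    e->next = NULL;
    if (s->tail) s->tail->next = e; else s->head = e;
    s->tail = e;
    pthread_cond_signal(&s->nonempty);
    pthread_mutex_unlock(&s->lock);
}

static void *stage_worker(void *arg) {      /* each stage runs a few of these */
    struct stage *s = arg;
    for (;;) {
        pthread_mutex_lock(&s->lock);
        while (!s->head)
            pthread_cond_wait(&s->nonempty, &s->lock);
        struct event *e = s->head;
        s->head = e->next;
        if (!s->head) s->tail = NULL;
        pthread_mutex_unlock(&s->lock);
        s->handler(e->data);                /* may call stage_enqueue() on the next stage */
        free(e);
    }
    return NULL;
}
```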

Flash Implementation
- Map pathname to file
  - Use pathname translation cache
- Create response header
  - Use response header cache
  - Aligned to 32 byte boundaries
    - Why? writev() (see the sketch below)
- Write response header (asynchronously)
- Memory map file
  - Use cache of file chunks (with LRU replacement)
- Write file contents (asynchronously)
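The point of writev() is that the cached, pre-formatted header and the mmapped file contents live in different buffers, and gather I/O sends both in one system call without copying them into a contiguous buffer. A simplified, synchronous sketch under that assumption; Flash itself issues non-blocking writes and retries from its event loop, and keeps the mapping in its chunk cache rather than mapping per request.

```c
#include <sys/uio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <string.h>

static ssize_t send_response(int conn, const char *hdr, int fd) {
    struct stat st;
    if (fstat(fd, &st) < 0)
        return -1;

    void *body = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (body == MAP_FAILED)
        return -1;

    struct iovec iov[2];
    iov[0].iov_base = (void *)hdr;        /* cached response header */
    iov[0].iov_len  = strlen(hdr);
    iov[1].iov_base = body;               /* file contents, straight from the mapping */
    iov[1].iov_len  = st.st_size;

    ssize_t n = writev(conn, iov, 2);     /* one gather write for header + body */
    munmap(body, st.st_size);
    return n;
}
```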

Flash Helpers
- Main process sends request over pipe (see the sketch below)
- Helper accesses necessary pages
  - mincore()
    - Feedback-based heuristic
    - Second-guess OS
- Helper notifies main process over pipe
- Why pipes?
  - select()-able
- How many helpers?
  - Enough to saturate disks
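A minimal sketch of an AMPED-style helper: it blocks reading its request pipe, does the potentially blocking disk work (here simply reading the file to pull its pages into the OS file cache), and writes a completion byte back. The main process can select() on the reply pipe exactly as it does on sockets, which is why pipes are the right IPC channel. The message format is made up, not Flash's.

```c
#include <fcntl.h>
#include <unistd.h>

struct helper_req { char path[512]; };    /* hypothetical request message */

static void helper_loop(int req_pipe, int reply_pipe) {
    struct helper_req req;
    char buf[65536], done = 1;

    while (read(req_pipe, &req, sizeof req) == sizeof req) {
        int fd = open(req.path, O_RDONLY);
        if (fd >= 0) {
            while (read(fd, buf, sizeof buf) > 0)   /* warm the OS file cache */
                ;
            close(fd);
        }
        write(reply_pipe, &done, 1);      /* wakes the main process's select() */
    }
}
```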

Costs and Benefits
- Information gathering
  - MP: requires IPC
  - MT: requires consolidation or fine-grained synchronization
  - SPED, AMPED: no IPC, no synchronization
- Application-level caching
  - MP: many caches
  - MT, SPED, AMPED: unified cache
- Long-lived connections
  - MP: process
  - MT: thread
  - SPED, AMPED: connection information

Performance Expectations
- In general
  - Cached: SPED, AMPED, Zeus > MT > MP, Apache
  - Disk-bound: AMPED > MT > MP, Apache >> SPED, Zeus
- What if Zeus had as many processes as Flash helpers?
  - Cached: worse than regular Zeus b/c of cache partitioning
  - Disk-bound: same as MP

Experimental Methodology
- 6 servers
  - Flash, Flash-MT, Flash-MP, Flash-SPED
  - Zeus 1.30 (SPED), Apache (MP)
- 2 operating systems
  - Solaris 2.6, FreeBSD
- 2 types of workloads
  - Synthetic
  - Trace-based
- 1 type of hardware
  - 333 MHz PII, 128 MB RAM, 100 Mbit/s Ethernet

Single File Test
- Repeatedly request the same file
- Vary file size
- Provides baseline
  - Servers can perform at their highest capacity

Solaris Single File Test

FreeBSD Single File Test

Single File Test Questions
- How does Flash-* compare to Zeus?
- Why is Apache slower?
- Why does Flash-SPED outperform Flash?
- Why do Flash-MT, Flash-MP lag?
- Why does Zeus lag for files between 10 and 100 KB on FreeBSD?
- Why no Flash-MT on FreeBSD?
- Which OS would you choose?

Solaris Rice Server Trace Test
- Measure throughput by replaying traces
- What do we learn?

Real Workloads
- Measure throughput by replaying traces
- Vary data set size
  - Evaluate impact of caching

Solaris Real Workload

FreeBSD Real Workload

Real Workload Observations
- Flash good on cached and disk-bound workloads
  - SPED a little better on cached workloads b/c it skips the memory-residence test
  - SPED deteriorates on disk-bound workload
  - MP suffers from many smaller caches
- Choice of OS matters
- Flash better than MP on disk-bound
  - Fewer total processes

Flash Optimizations
- Test effect of different optimizations
- What do we learn?

WAN Conditions
- Test effect of WAN conditions
  - Less bandwidth
  - Higher packet loss
- What do we learn?

In Summary, Flash-MT or Flash?
- Cynical
  - Don’t bother with Flash
- Practical
  - Flash easier than kernel-level threads
  - Flash scales better than Flash-MT with many, long-lived connections
- However:
  - What about read/write workloads?
  - What about SMP machines?

Do We Really Have to Choose?
- Threads
- Events

Remember SEDA…?
- Staged Event Driven (SEDA)
  - Targeted at higher-level runtimes (e.g., Java VM)
  - Each stage is event driven, but uses its own threads to process events
- Why would we want this?
- What’s the problem?

Let’s Try Something Different…

Checking Our Vocabulary
- Task management
  - Serial, preemptive, cooperative
- Stack management
  - Automatic, manual
- I/O management
  - Synchronous, asynchronous
- Conflict management
  - With concurrency, need locks, semaphores, monitors
- Data partitioning
  - Shared, task-specific

Separate Stack and Task Management!
- Religious war conflates two orthogonal axes
  - Stack management
  - Task management

Automatic vs. Manual Stack Management in More Detail
- Automatic
  - Each complete task a procedure/method/…
  - Task state stored on stack
- Manual
  - Each step an event handler
  - Event handlers invoked by scheduler
  - Control flow expressed through continuations (see the sketch below)
    - Necessary state + next event handler
  - Scheme: call/cc reifies stack and control flow
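With manual stack management, "what happens next" is carried explicitly as a continuation: the saved state plus the next event handler to run. A minimal sketch of that shape; the struct and scheduler names are made up, not the paper's.

```c
struct continuation {
    void (*handler)(struct continuation *self);  /* next step of the task */
    void *state;                                 /* what would have lived on the stack */
};

/* The scheduler simply invokes whatever handler a completed I/O hands back. */
static void dispatch(struct continuation *k) {
    k->handler(k);
}
```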

call/cc in Action
- (+ 1 (call/cc (lambda (k) (+ 2 (k 3)))))
- Continuation reified by call/cc represents (+ 1 [])
- When applying continuation on 3
  - Abort addition of 2
  - Evaluate (+ 1 3), resulting in 4
- Thanks to Dorai Sitaram, Teach Yourself Scheme in Fixnum Days

call/cc in Action (cont.)
- (define r #f)
  (+ 1 (call/cc (lambda (k) (set! r k) (+ 2 (k 3)))))
- Not surprisingly, this also results in 4
- (r 5)
  - Results in?

Manual Stack Management: Stack Ripping
- As we add blocking calls to event-based code (see the sketch below)
  - Need to break procedures into event handlers
- Issues
  - Procedure scoping
    - From one to many procedures
  - Automatic variables
    - From stack to heap
  - Control structures
    - Loops can get nasty (really?)
  - Debugging
    - Need to recover call stack
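A sketch of stack ripping under made-up names. The blocking-style version keeps its loop counter and running total in automatic variables; once the size lookup becomes asynchronous, the procedure must be split into an initiator plus a completion handler, and those locals move into a heap-allocated continuation. get_size_sync()/get_size_async() are a hypothetical I/O layer assumed to exist.

```c
#include <stdlib.h>

/* hypothetical I/O layer */
long get_size_sync(const char *path);
void get_size_async(const char *path, void (*cb)(long size, void *arg), void *arg);

/* Blocking style: one procedure, state on the stack. */
long total_size_blocking(const char **paths, int n) {
    long total = 0;
    for (int i = 0; i < n; i++)
        total += get_size_sync(paths[i]);        /* may block on disk */
    return total;
}

/* Ripped style: the loop body becomes a handler, re-armed per completion. */
struct sum_cont {
    const char **paths;
    int i, n;
    long total;                                  /* was an automatic variable */
    void (*done)(long total, void *arg);
    void *arg;
};

static void sum_step(long size, void *c_) {      /* completion handler */
    struct sum_cont *c = c_;
    c->total += size;
    if (++c->i < c->n) {
        get_size_async(c->paths[c->i], sum_step, c);   /* "next iteration" */
    } else {
        c->done(c->total, c->arg);
        free(c);
    }
}

void total_size_ripped(const char **paths, int n,     /* assumes n > 0 */
                       void (*done)(long, void *), void *arg) {
    struct sum_cont *c = malloc(sizeof *c);
    *c = (struct sum_cont){ paths, 0, n, 0, done, arg };
    get_size_async(paths[0], sum_step, c);
}
```

One procedure has become three declarations, and the loop is now spread across calls, which is exactly the scoping, heap-allocation, and debugging pain the slide lists.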

So, Why Bother with Manual Stacks?
- Hidden assumptions become explicit
  - Concurrency
    - Static check: yielding, atomic
    - Dynamic check: startAtomic(), endAtomic(), yield()
  - Remote communications (RPC)
    - Take much longer, have different failure modes
- Better performance, scalability
- Easier to implement

Hybrid Approach
- Cooperative task management
  - Avoid synchronization issues
- Automatic stack management
  - For the software engineering wonks amongst us
- Manual stack management
  - For “real men”

Implementation
- Based on Win32 fibers (see the sketch below)
  - User-level, cooperative threads
- Main fiber
  - Event scheduler
  - Event handlers
- Auxiliary fibers
  - Blocking code
- Macros to
  - Adapt between manual and automatic
  - Wrap I/O operations
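A minimal sketch of the fiber arrangement: the main fiber runs the event scheduler, blocking-style code runs on an auxiliary fiber, and instead of truly blocking it starts asynchronous I/O and switches back to the main fiber. The Win32 calls (ConvertThreadToFiber, CreateFiber, SwitchToFiber, DeleteFiber) are real; the scheduling glue and helper names are made up, not the paper's adaptor macros.

```c
#include <windows.h>

static LPVOID main_fiber;                 /* runs the event scheduler */

static VOID CALLBACK blocking_task(LPVOID arg) {
    (void)arg;
    /* blocking-style (automatic stack management) code runs here; rather
       than blocking on I/O, it starts an asynchronous operation and
       yields back to the scheduler: */
    SwitchToFiber(main_fiber);
    /* ...resumed here once the scheduler sees the I/O complete... */
    SwitchToFiber(main_fiber);            /* done; hand control back for good
                                             (a fiber procedure must not return) */
}

static void run_scheduler(void) {
    main_fiber = ConvertThreadToFiber(NULL);            /* current thread becomes a fiber */
    LPVOID task = CreateFiber(0, blocking_task, NULL);  /* auxiliary fiber for blocking code */

    SwitchToFiber(task);                  /* run the task up to its first yield */
    /* ... event loop: when the task's I/O completes, resume it ... */
    SwitchToFiber(task);
    DeleteFiber(task);
}
```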

Manual Calling Automatic
- Set up continuation
  - Copy result
  - Invoke original continuation
- Set up fiber
- Switch to fiber
- Issue: I/O
  - Are we really blocking?
  - No, we use asynchronous I/O and yield back to main fiber

Automatic Calling Manual
- Set up special continuation
  - Test whether we actually switched fibers
  - If not, simply return
- Invoke event handler
- Return to main fiber
- When done with task
  - Resume fiber

What Do We Learn?
- Adaptors induce headaches…
  - Even the authors can’t get the examples right…
  - Sometimes caInfo, sometimes *caInfo, etc.
- More seriously, implicit trade-off
  - Manual: optimized continuations vs. stack ripping
  - Automatic: larger continuations (stack-based) vs. more familiar programming model
- Performance implications???

I Need Your Confession
- Who has written event-based code?
  - What about user interfaces?
    - MacOS, Windows, Java Swing
- Who has written multi-threaded code?
- Who has used Scheme’s continuations?
- What do you think?