Difference of Degradation Schemes among Operating Systems
- Experimental analysis for web application servers -
Hideaki Hibino* (Tokyo Tech), Kenichi Kourai (Tokyo Tech), Shigeru Chiba (Tokyo Tech)

Web app. server
- Provides multiple services, e.g. an online shopping site with login pages, search pages, and purchase pages
- Under heavy workload, the performance of each service degrades differently: some degrade gracefully, others steeply (graphs: performance vs. workload)

Degradation scheme (DS)
- How performance slows down under heavy workload (graph: performance, from good to bad, vs. workload)
- Depends on the OS

Experiment
- Two services on a single server: a light-weight service (30 clients) and a heavy-weight service (0-40 clients)
- Clients generate requests to both services; the workload is changed by varying the number of clients (a client sketch follows)
- Specs: Tomcat 5.0.25; Solaris 9, Linux 2.6.7/2.6.5/2.4.18, FreeBSD 5.2.1, or Windows 2003 Server; Xeon 3.06 GHz x 2, 2 GB memory, 1 Gbps NIC

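A minimal sketch of the kind of load client such an experiment uses (hypothetical; the slides do not show client code): each thread is one client that repeatedly fetches a fixed URL, so the workload is varied simply by the thread count. HOST, PORT, and PATH are illustrative placeholders.

    /* Hypothetical load-client sketch (not the authors' code): each
     * thread is one client that repeatedly fetches a fixed URL, so
     * the workload is set by the thread count given on argv[1]. */
    #include <netdb.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define HOST "server"            /* placeholder server name */
    #define PORT "8080"              /* placeholder Tomcat port */
    #define PATH "/light/index.jsp"  /* placeholder service URL */

    static void *client(void *arg) {
        char req[256], buf[4096];
        snprintf(req, sizeof req,
                 "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n", PATH, HOST);
        for (;;) {                           /* one request per iteration */
            struct addrinfo hints = { 0 }, *ai;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(HOST, PORT, &hints, &ai) != 0) return NULL;
            int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (s >= 0 && connect(s, ai->ai_addr, ai->ai_addrlen) == 0) {
                write(s, req, strlen(req));
                while (read(s, buf, sizeof buf) > 0)   /* drain the reply */
                    ;
            }
            if (s >= 0) close(s);
            freeaddrinfo(ai);
        }
        return NULL;
    }

    int main(int argc, char **argv) {
        int n = (argc > 1) ? atoi(argv[1]) : 30;  /* # of clients = workload */
        pthread_t t;
        for (int i = 0; i < n; i++)
            pthread_create(&t, NULL, client, NULL);
        pause();                                  /* run until interrupted */
        return 0;
    }
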
Throughput on four OSes
(Graphs: throughput of the light-weight and heavy-weight services vs. # of heavy-service clients)
- The throughput on Solaris degrades gracefully, but is low overall
- The throughput on Windows decreases steeply
- The throughputs on Linux and FreeBSD are similar

Throughput on three Linux versions
(Graphs: throughput vs. # of heavy-service clients)
- Two of the versions behave similarly
- The throughput of 2.6.7 is improved; the throughput of 2.6.5 is good

Question
- What makes this difference?
- The degradation scheme depends heavily on the operating system
- For commercial systems this is a real problem: do we need separate machines for the light-weight and heavy-weight services? That is often unacceptable

Solaris vs. Linux
- We found the major factors behind the differences: the thread implementation and the kernel scheduler
- The following slides explain these in further detail, analyzing only the light-weight service
- Workload: 30 clients for the light-weight service, 20 clients for the heavy-weight service

Execution time per request
- Threads are mostly in the waiting state! Running time is approx. 4 ms on both OSes
- Breakdown (bar chart): Solaris 140 ms in total (lock 136 ms, etc. 4 ms); Linux 389 ms in total (lock 348 ms, poll 19 ms, etc. 21 ms)

Why waiting so long on Linux? (1)
- Implementation of threads: Linux issues more system calls for lock acquisition
- System calls while processing a request (the total is roughly the per-call waiting time times the number of calls):

                                        Solaris   Linux
    Waiting time per system call (ms)   64.6      65.2
    Average number of calls             1.3       2.9
    Total waiting time (ms)             82.0      189

Why fewer system calls on Solaris? (1/3)
- JVM 1.4.2 calls mutex_trylock() before mutex_lock()
- Only if the try fails does the JVM issue the mutex_lock system call and block

    mutex_t lock;
    ...
    if (mutex_trylock(&lock) != 0)   /* non-blocking attempt first */
        mutex_lock(&lock);           /* block only if the try failed */
    ...

Why fewer system calls on Solaris? (2/3)
- The thread library uses an adaptive lock, combining a spin lock with mutex_lock()
- A thread is blocked only when it fails to acquire the lock while spinning

    mutex_t lock;
    ...
    if (spin_lock(&lock) != 0)       /* spin for a while first */
        mutex_lock(&lock);           /* block only if spinning failed */
    ...

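A self-contained sketch of the adaptive idea using portable POSIX calls (the Solaris library's actual spin loop is internal to its thread library; SPIN_LIMIT and the bounded trylock loop here are illustrative assumptions):

    /* Adaptive-lock sketch: spin briefly with non-blocking tries,
     * fall back to a blocking lock (a system call) only if the
     * spinning fails.  SPIN_LIMIT is illustrative, not Solaris's. */
    #include <pthread.h>

    #define SPIN_LIMIT 1000

    static void adaptive_lock(pthread_mutex_t *m) {
        for (int i = 0; i < SPIN_LIMIT; i++)
            if (pthread_mutex_trylock(m) == 0)  /* got it without blocking */
                return;
        pthread_mutex_lock(m);                  /* give up and block */
    }
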
Why fewer system calls on Solaris? (3/3)
- Solaris provides a cond_wait system call
- Linux's pthread_cond_wait() is implemented with four system calls:

    int pthread_cond_wait( ... ) {
        mutex_lock(&cond);
        mutex_unlock(&mutex);
        do {
            mutex_unlock(&cond);
            futex_wait(&futex);      /* sleep in the kernel */
            mutex_lock(&cond);
        } while ( ... );
        mutex_unlock(&cond);
        mutex_lock(&mutex);
    }

Any other reason?
- A thread stays in the thread pool for a long time: 53 ms on Solaris, 154 ms on Linux
- Thread life cycle (13 threads in the pool; see the sketch below): 1. accept a connection, 2. wake up, 3. process a request, 4. enter the pool

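A minimal sketch of this life cycle (a hypothetical simplification of Tomcat's pool; all names are illustrative): workers sleep in the pool and the acceptor hands over one connection at a time.

    /* Minimal pool sketch: a worker waits in the pool (step 4), is
     * woken up with a connection (step 2), processes the request
     * (step 3), and waits again.  The acceptor (step 1) hands one
     * connection fd at a time through a one-slot queue. */
    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  slot_free = PTHREAD_COND_INITIALIZER;
    static int pending = -1;              /* the one queued connection fd */

    static void process_request(int fd) { (void)fd; /* placeholder */ }

    static void *worker(void *arg) {
        for (;;) {
            pthread_mutex_lock(&m);
            while (pending < 0)               /* 4. wait in the thread pool */
                pthread_cond_wait(&nonempty, &m);
            int fd = pending;                 /* 2. woken up with a connection */
            pending = -1;
            pthread_cond_signal(&slot_free);  /* let the acceptor queue again */
            pthread_mutex_unlock(&m);
            process_request(fd);              /* 3. process the request */
        }
        return NULL;
    }

    static void hand_off(int fd) {            /* 1. acceptor hands off the fd */
        pthread_mutex_lock(&m);
        while (pending >= 0)                  /* one-slot queue: wait if full */
            pthread_cond_wait(&slot_free, &m);
        pending = fd;
        pthread_cond_signal(&nonempty);       /* wake up exactly one worker */
        pthread_mutex_unlock(&m);
    }
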
Behavior of threads
- Connection handling is sequential: threads are woken up one at a time
- Threads share CPU time, so connection handling is preempted by threads processing requests
(Diagram: Thread-1 wakes up from the thread pool; Thread-2 and Thread-3 process requests and preempt its connection handling)

Why waiting so long on Linux? (2)
- The kernel-level scheduler is inappropriate: the time slice of the heavy-weight service is much longer on Linux
- Connection handling is therefore preempted by the heavy-weight service, and the waiting time in the thread pool becomes long

Thread priority
- Solaris: priority is variable, so threads are frequently preempted and the time slice becomes short
- Linux: priority is almost constant, so threads are preempted less frequently and the time slice becomes long
- This is what causes the longer time slice on Linux

Experiment: shorter time slice on Linux
- We modified the Linux kernel source (see the sketch below)
- Maximum time slice: 200 ms -> 2 ms
- Minimum time slice: 10 ms -> 1 ms

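The change plausibly amounts to editing two macros of the 2.6 O(1) scheduler; the sketch below assumes the MIN_TIMESLICE/MAX_TIMESLICE definitions in kernel/sched.c and HZ = 1000, and is not necessarily the authors' exact patch.

    --- kernel/sched.c (Linux 2.6, original)
    +++ kernel/sched.c (modified for the experiment)
    -#define MIN_TIMESLICE   ( 10 * HZ / 1000)   /*  10 ms */
    -#define MAX_TIMESLICE   (200 * HZ / 1000)   /* 200 ms */
    +#define MIN_TIMESLICE   (  1 * HZ / 1000)   /*   1 ms */
    +#define MAX_TIMESLICE   (  2 * HZ / 1000)   /*   2 ms */
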
Results: impact on waiting time
(Graphs: breakdown of time spent in the thread pool, and waiting time)

Results: impact on DS
(Graphs: throughput of the light-weight and heavy-weight services)
- The behavior on Linux approaches that on Solaris

Concluding remarks
- The degradation scheme of a web application server depends on the underlying OS
- The major factor is the waiting time for acquiring locks
- Thread implementation: Linux issues more system calls (thread library, JVM)
- Kernel scheduler: because of how thread priority is managed, CPU preemption occurs less frequently on Linux

Related work
- Behavior of web servers under heavy workload: the bottleneck is I/O processing by the kernel [Almeida '96]; different workloads cause different bottlenecks [Pradhan '02]
- Behavior of web application servers under heavy workload: the database [McWherter '04]

Future direction
- A middleware-level mechanism for controlling the DS, independent of the operating system
- Use of aspect-oriented programming: dynamically injecting sleep code that yields a part of the CPU time (sketched below)
- Related work on changing scheduling in the OS: Gray-Box [Andrea '01], Infokernel [Andrea '03]

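A rough sketch of the injected-sleep idea (the proposal targets middleware-level AOP, e.g. advice around request handlers; this C sketch only illustrates the effect, and handle_request()/SLEEP_US are hypothetical names and values):

    /* Sleep-injection sketch: the advice wraps the original request
     * handler and sleeps briefly first, yielding a part of the CPU
     * time to other services. */
    #include <time.h>

    #define SLEEP_US 500                 /* hypothetical yield per request */

    extern void handle_request(int fd);  /* the original (advised) code */

    void handle_request_with_sleep(int fd) {
        struct timespec ts = { 0, SLEEP_US * 1000L };
        nanosleep(&ts, NULL);            /* injected: yield the CPU */
        handle_request(fd);              /* then run the real handler */
    }
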
The end
Thank you for your attention!