1. Web Server Performance in a WAN Environment
Vincent W. Freeh, Computer Science, North Carolina State University
Vsevolod V. Panteleenko, Computer Science & Engineering, University of Notre Dame
2. Large web site
- Complex design and interaction
- Multiple tiers: appliance; web, app, & DB servers
- Study performance of the web server: cached pages
- Most testing: simulated load, LAN environment
- Our evaluation adds a simulated WAN environment: small MTU, bandwidth limits, latency
- Shows some optimizations aren't as effective as LAN tests suggest
3. Evaluating a web server
Three parts:
- Measuring the server
- Loading the server
- Supporting the server
(Diagram labels: net, server load, server demand, tiers 2 & 3)
4. Two ways to load a server
Synthetic load
- Controlled, reproducible, flexible
- Only as good as its assumptions and mechanisms
- Hard to replicate the real world
Real-world load
- Uncontrolled, not reproducible (can use traces)
- Accurate model of the system
- Hard to produce extreme or rare conditions
Discussion
- Need both
- Validate simulations with real-world tests
5. Loading the server
- Our tests use synthetic load
- Three load-generating tools
Micro-benchmarking tool
- Requests a single object at a constant rate (see sketch below)
- Tests delivery of static, cached documents
- Establishes a baseline
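A minimal sketch of what such a constant-rate requester could look like, assuming one blocking connection per request and a fixed inter-request gap. The host name, port, object path, and request rate are placeholders, not the authors' actual tool.

```c
/* Hypothetical sketch of a constant-rate micro-benchmark client: one
 * request for one static object at a fixed rate.  Host, port, path,
 * and rate are placeholders, not the authors' tool. */
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static void fetch_once(const char *host, const char *port, const char *path)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
        char req[512], buf[4096];
        int n = snprintf(req, sizeof req,
                         "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n", path, host);
        write(fd, req, n);
        while (read(fd, buf, sizeof buf) > 0)   /* drain the reply */
            ;
    }
    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
}

int main(void)
{
    const int rate = 100;                 /* requests per second (assumed) */
    const useconds_t gap = 1000000 / rate;

    for (int i = 0; i < 10000; i++) {     /* constant-rate request loop */
        fetch_once("server.example", "80", "/index.html");
        usleep(gap);                      /* crude pacing; ignores service time */
    }
    return 0;
}
```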
6. Modified SURGE
SURGE: Scalable URL Reference Generator (Barford & Crovella, Boston University)
- Emulates statistical distributions of: object & request size, object popularity, embedded object references, temporal locality, use of idle periods
Modifications
- Converted from process-based to event-based, to increase the number of clients (see sketch below)
- Server-throttling problem eliminated
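A hypothetical sketch of the event-based structure this implies: a single process drives many non-blocking connections from one poll() loop instead of forking a process per emulated user. The host, port, request, and client count are placeholders; this is not the modified SURGE code.

```c
/* Hypothetical sketch of an event-based load client: one process drives
 * many concurrent connections from a single poll() loop instead of one
 * process per emulated user.  Host, port, request, and client count are
 * placeholders; this is not the modified SURGE code. */
#include <fcntl.h>
#include <netdb.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define NCLIENTS 256

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("server.example", "80", &hints, &res) != 0)
        return 1;

    struct pollfd pfds[NCLIENTS];
    for (int i = 0; i < NCLIENTS; i++) {
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        fcntl(fd, F_SETFL, O_NONBLOCK);       /* non-blocking connect */
        connect(fd, res->ai_addr, res->ai_addrlen);
        pfds[i].fd = fd;
        pfds[i].events = POLLOUT;             /* writable once connected */
    }

    const char req[] = "GET /index.html HTTP/1.0\r\n\r\n";
    int remaining = NCLIENTS;
    while (remaining > 0) {
        poll(pfds, NCLIENTS, -1);             /* one wait for all clients */
        for (int i = 0; i < NCLIENTS; i++) {
            if (pfds[i].revents & POLLOUT) {
                write(pfds[i].fd, req, sizeof req - 1);
                pfds[i].events = POLLIN;      /* now wait for the reply */
            } else if (pfds[i].revents & (POLLIN | POLLHUP)) {
                char buf[4096];
                if (read(pfds[i].fd, buf, sizeof buf) <= 0) {
                    close(pfds[i].fd);        /* reply fully drained */
                    pfds[i].fd = -1;          /* negative fds are ignored */
                    remaining--;
                }
            }
        }
    }
    freeaddrinfo(res);
    return 0;
}
```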
7. Delays and limits
Emulate WAN parameters in a LAN
- Network delays
- Bandwidth limits
Modified kernel and protocol stack
- Separate delay queue per TCP connection (see model below)
- Necessary for accurate emulation
- More accurate than Dummynet and NISTnet, which shape traffic per interface
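The actual emulation lives in the modified kernel and protocol stack; the following is only a user-level toy model of the per-connection idea, assuming packets on a connection queue behind each other at the emulated bandwidth and then incur a fixed propagation delay. The 200 ms / 56 Kbps values mirror the test parameters; everything else is illustrative.

```c
/* Toy user-level model of a per-connection delay + bandwidth queue; the
 * paper's version is implemented inside the modified kernel stack.
 * Times are in microseconds. */
#include <stdio.h>

#define DELAY_US 200000ULL            /* emulated network delay: 200 ms */
#define BW_BPS   56000ULL             /* emulated link: 56 Kbps         */

struct conn_queue {
    unsigned long long link_free_at;  /* when the emulated link is idle */
};

/* Returns when (in us, relative to 'now') a packet of 'len' bytes would
 * actually be released toward the receiver. */
static unsigned long long release_time(struct conn_queue *q,
                                       unsigned long long now,
                                       unsigned long len)
{
    /* Serialization time at the emulated bandwidth. */
    unsigned long long tx_us = (len * 8ULL * 1000000ULL) / BW_BPS;

    /* Packets queue behind earlier ones on the same connection. */
    unsigned long long start = now > q->link_free_at ? now : q->link_free_at;
    q->link_free_at = start + tx_us;

    /* Propagation delay is added after serialization. */
    return q->link_free_at + DELAY_US;
}

int main(void)
{
    struct conn_queue q = {0};
    /* Example: three 536-byte packets sent back to back at t = 0. */
    for (int i = 0; i < 3; i++)
        printf("packet %d released at %llu us\n",
               i, release_time(&q, 0, 536));
    return 0;
}
```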
8. Measuring a web server
(Diagram: HTTP request/reply path through the server (Apache, TUX), the TCP/IP stack, drivers, OS, and network)
9. Measuring a web server
- Measure utilization using hardware performance counters (see sketch below)
(Diagram: same request/reply path as the previous slide)
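The paper attributes CPU cost with hardware performance counters; the snippet below only sketches the general idea, using the x86 time-stamp counter to count cycles spent in a code region. The measured loop is a stand-in for the server's receive/send path.

```c
/* Illustrative only: count cycles spent in a code region with the x86
 * time-stamp counter.  The paper's measurements use hardware performance
 * counters; the busy loop here is just a stand-in workload. */
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = rdtsc();

    /* ... region of interest, e.g. a recv()/send() path ... */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 1000000; i++)
        sink += i;

    uint64_t cycles = rdtsc() - start;
    printf("region took %llu cycles\n", (unsigned long long)cycles);
    return 0;
}
```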
10. Test environment
- OS: Linux 2.4.8
- Node (server & clients): Pentium III, 650 MHz, 512 MB main memory
- NIC: 3Com 3C590, 100 Mbps Ethernet, direct connect
- Software: client runs micro-benchmarking, SURGE, delay/limits; server runs Apache, TUX
- Warmed client: no cache misses
11. Cost breakdown vs. file size, Apache
- Majority of time is spent in interrupt handling (receiving), yet most of the data is sent
- MTU = 536 bytes, delay = 200 ms, BW = 56 Kbps
- Data send rate = 3 MB/s
12. Cost breakdown vs. file size, TUX
- Twice the data send rate of Apache
- Essentially all cost is in interrupts
- MTU = 536 bytes, delay = 200 ms, BW = 56 Kbps
- Data send rate = 6 MB/s
13. Apache versus TUX

                            Apache      TUX
  Server send rate          3.0 MB/s    6.0 MB/s
  Packets received / s      5,738       11,991
  Packets sent / s          6,156       11,878
  Interrupts / s            7,482       13,974
  Concurrent connections    784         1,451
14. Cost breakdown vs. MTU
SURGE parameters:
- Size = 10 KB
- Delay = 200 ms
- BW = 56 Kbps
- Data send rate = 6 MB/s
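A rough back-of-the-envelope calculation of why a small MTU drives up per-object cost, assuming the 10 KB object size above and 40 bytes of TCP/IP headers per packet (no options); the packet counts are illustrative, not measurements from the paper.

```c
/* Back-of-the-envelope packet count for one 10 KB object, assuming
 * 40 bytes of TCP/IP header per packet (no options). */
#include <stdio.h>

int main(void)
{
    const long object = 10 * 1024;            /* bytes of payload   */
    const long mtus[] = { 536, 1500 };

    for (int i = 0; i < 2; i++) {
        long mss = mtus[i] - 40;              /* payload per packet */
        long pkts = (object + mss - 1) / mss; /* ceiling division   */
        printf("MTU %4ld -> about %ld data packets\n", mtus[i], pkts);
    }
    return 0;   /* prints roughly 21 packets at MTU 536 vs. 8 at MTU 1500 */
}
```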
15. Effects of network delay
SURGE parameters:
- MTU = 536 bytes
- Size = 10 KB
- BW = 56 Kbps
- Data send rate = 6 MB/s
16. Effects of bandwidth limits
SURGE parameters:
- MTU = 536 bytes
- Size = 10 KB
- Delay = 200 ms
- Data send rate = 6 MB/s
Result: 20% decrease in overhead going from 28 Kbps to unlimited bandwidth
17. Persistent connections
SURGE parameters:
- MTU = 536 bytes
- Size = 10 KB
- Delay = 200 ms
- Data send rate = 6 MB/s
Result: 10% decrease going from 1 to 16 requests per connection (see sketch below)
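A hypothetical sketch of what "16 requests per connection" means at the socket level: the requests reuse one TCP connection instead of opening and closing a connection per object. The host and object names are placeholders and the response handling is deliberately simplified; this is not the authors' client.

```c
/* Hypothetical sketch: several requests over one persistent (HTTP/1.1)
 * connection, versus one connection per request.  Host and paths are
 * placeholders; response parsing is omitted. */
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("server.example", "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    /* 16 requests share one TCP connection: one handshake, one slow-start
     * ramp, one close -- the overheads persistent connections avoid. */
    for (int i = 0; i < 16; i++) {
        char req[256], buf[8192];
        int n = snprintf(req, sizeof req,
                         "GET /obj%d.html HTTP/1.1\r\n"
                         "Host: server.example\r\n\r\n", i);
        write(fd, req, n);
        read(fd, buf, sizeof buf);   /* sketch only: a real client parses
                                        Content-Length before reusing fd */
    }
    close(fd);
    freeaddrinfo(res);
    return 0;
}
```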
18. Copy and checksumming
SURGE parameters:
- MTU = 536 bytes
- Size = 10 KB
- Delay = 200 ms
- Data send rate = 6 MB/s
19. Re-assess the value of some optimizations
Copy & checksumming avoidance
- LAN: 25-111% (copy), or 21-33% (copy) & 10-15% (checksum)
- WAN: 10% combined
Select optimization
- LAN: 28%
- WAN: < 10%
Connection open/close avoidance (HTTP 1.1)
- LAN: "greatly", "significantly"
- WAN: < 10%
20. Conclusion
- Most processing is in the protocol stack and drivers
- Small MTU size increases processing cost
- Little effect from: network delay, bandwidth limitations, persistent connections
- End-user request latency depends primarily on connection bandwidth, secondarily on network delay
Future work
- Dynamic & uncached pages
- Add packet loss
Work supported by IBM UPP & NSF CCR9876073
www.csc.ncsu.edu/faculty/freeh/
21. End
22. Persistent connections - packets/s
23. Number of packets vs. MTU
24. Web (HTTP) servers
Apache
- Largest installed base
- User space
- Process-based model
TUX
- Niche server
- Kernel space
- Event-based model
- Aggressive optimizations: copy/checksum avoidance; object and name caching (see sketch below)
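TUX's copy and checksum avoidance happens inside the kernel; the sketch below only illustrates the copy-avoidance idea at user level by contrasting a read()/write() loop with sendfile(), which lets the kernel move page-cache data straight to the socket. The file path and descriptors are placeholders, and checksum offload (a NIC/driver feature) is not shown.

```c
/* TUX does copy and checksum avoidance inside the kernel; this user-level
 * sketch only contrasts the copying path (read()/write()) with the
 * zero-copy path (sendfile()). */
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/sendfile.h>

/* Copies file data through a user-space buffer: one extra copy per block. */
static void send_with_copies(int sock, int file)
{
    char buf[8192];
    ssize_t n;
    while ((n = read(file, buf, sizeof buf)) > 0)
        write(sock, buf, n);
}

/* Lets the kernel move pages from the page cache straight to the socket. */
static void send_zero_copy(int sock, int file)
{
    struct stat st;
    off_t off = 0;
    fstat(file, &st);
    sendfile(sock, file, &off, st.st_size);   /* older kernels require a
                                                 socket as the output fd */
}

int main(void)
{
    int file = open("/var/www/index.html", O_RDONLY);  /* placeholder path */
    int sock = STDOUT_FILENO;     /* stand-in for an accepted connection */
    if (file >= 0) {
        if (getenv("USE_COPIES"))
            send_with_copies(sock, file);     /* copying path */
        else
            send_zero_copy(sock, file);       /* zero-copy path */
        close(file);
    }
    return 0;
}
```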
25. Measuring a web server
(Diagram: HTTP request/reply path through the OS and network, as on slides 8-9)
26. Interrupt coalescing
- Decreases interrupt scheduling overhead
- Interrupt every 2 ms