Slide 1: A Network Measurement Architecture for Adaptive Networked Applications
Mark Stemm* and Randy H. Katz, Computer Science Division, University of California at Berkeley
Srinivasan Seshan, IBM T.J. Watson Research Labs, Hawthorne, NY
*FastForward Networks, Inc., San Francisco, CA
Slide 2: The Problem
Predict in advance the network performance (e.g., available bandwidth, latency, packet loss probability) to distant network sites.
[Diagram: a client facing "???" about paths across the Internet]
Slide 3: Why Does It Matter?
Applications that could use this:
–Mirror site selection (Harvest, mirrored FTP/Web sites)
–Fidelity adaptation (CMU Odyssey, UCB TranSend, low-graphics WWW sites)
–Network state feedback
[Figure: today's "Click here or here to download" choice vs. an informed selection]
Slide 4: SPAND (Shared Passive Network Performance Discovery)
–Shared: clients use the past knowledge of nearby clients
–Passive, App-Specific: app-to-app traffic is used to measure performance
[Diagram: clients and a Performance Server in a local admin domain exchange Data, Perf. Reports, and Perf. Query/Response through a gateway to the Internet]
Slide 5: Advantages of SPAND
Shared measurements:
–Clients pool individual measurements to obtain more up-to-date information
Passive measurements:
–No new measurement traffic is introduced into the network
Application-specific measurements:
–More likely to measure what the client really cares about than network-level measurements
Slide 6: SPAND Architecture
[Diagram: clients and a Packet Capture Host in the local domain send Perf. Reports to a Performance Server; clients exchange Perf. Query/Response with it; Data flows to and from the Internet]
Slide 7: SPAND Architecture (Modified)
Clients:
–Make Performance Reports to Performance Servers
–Send Performance Requests to Performance Servers
Performance Servers:
–Receive reports from clients
–Aggregate/post-process reports
–Respond to requests with Performance Responses
Packet Capture Host:
–Snoops on local traffic
–Makes Performance Reports on behalf of unmodified clients
(A sketch of these interactions follows.)
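A minimal sketch of how these components could interact. The message fields, the in-memory storage, and the median-of-recent-reports aggregation rule are illustrative assumptions, not SPAND's actual protocol or wire format.

```python
# Sketch of the report/query interaction described on Slide 7.
# Field names and the aggregation rule are assumptions for illustration.
from dataclasses import dataclass, field
from statistics import median
from collections import defaultdict
import time

@dataclass
class PerformanceReport:
    destination: str          # remote host or mirror the client talked to
    app_metric: float         # application-level metric, e.g. download time (s)
    timestamp: float = field(default_factory=time.time)

class PerformanceServer:
    """Per-domain repository that pools reports from nearby clients."""
    def __init__(self, max_age_s: float = 3600.0):
        self.max_age_s = max_age_s          # ignore stale reports (temporal noise)
        self.reports = defaultdict(list)    # destination -> [PerformanceReport]

    def report(self, r: PerformanceReport) -> None:
        self.reports[r.destination].append(r)

    def query(self, destination: str) -> float | None:
        """Return a summary of recent reports, or None if nothing is known."""
        now = time.time()
        recent = [r.app_metric for r in self.reports[destination]
                  if now - r.timestamp <= self.max_age_s]
        return median(recent) if recent else None

# Usage: a client (or the packet capture host, on behalf of an unmodified
# client) reports; another client queries before opening a connection.
server = PerformanceServer()
server.report(PerformanceReport("mirror1.example.org", app_metric=3.0))
server.report(PerformanceReport("mirror1.example.org", app_metric=5.0))
print(server.query("mirror1.example.org"))   # -> 4.0 (median of pooled reports)
```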
Slides 8-12: SPAND Architecture (diagram animation)
[Figure, built up over five slides: clients and the Packet Capture Host send Perf. Reports to the Performance Server; Data flows between clients and the Internet; clients exchange Perf. Query/Response with the Performance Server]
Slide 13: LookingGlass: Web Mirror Selection
Problem:
–A client must choose among mirror locations replicating the same content
Three sub-problems:
–Mechanisms for mirror advertisement: how does the client discover the mirrors?
–Metrics for mirror ranking: how does the client determine the "best" mirror?
–Algorithm for mirror selection: given a ranking, how does the client choose a mirror?
Slide 14: Current Solution and Its Problems
Mechanisms for mirror advertisement:
–Manually created list of sites on a web page
–Admin must update the list every time a mirror is added or deleted
Metrics for mirror ranking:
–Usually none; perhaps hints such as geographic location
Algorithm for mirror selection:
–Currently made by the user (pick the "best")
–Often leads to hotspots (pick the first in the list)
Slide 15: Our Solution: LookingGlass
Mechanisms for mirror advertisement:
–Transparent way to notify clients of mirrors
–Distributed algorithm to disseminate mirror information between mirror sites
Metrics for mirror ranking:
–SPAND's web page download metric is used to rank mirrors
Algorithm for mirror selection:
–Randomly select, with weights in proportion to the ranking (see the sketch below)
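A minimal sketch of the selection rule. Weighting each mirror by the inverse of its predicted download time is one plausible reading of "weights in proportion to ranking"; the exact weighting function and the mirror names are assumptions.

```python
# Weighted random mirror selection, as one plausible reading of Slide 15.
import random

def pick_mirror(predictions: dict[str, float]) -> str:
    """predictions maps mirror -> predicted download time in seconds
    (e.g. taken from a SPAND Performance Response)."""
    mirrors = list(predictions)
    weights = [1.0 / predictions[m] for m in mirrors]   # faster mirror, larger weight
    return random.choices(mirrors, weights=weights, k=1)[0]

# Usage: spreads load across mirrors while still favoring the best
# predictions, avoiding the hotspots caused by "always pick the first".
print(pick_mirror({"m1.example.org": 3.0, "m2.example.org": 4.0, "mn.example.org": 5.0}))
```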
Slide 16: Experimental Methodology
Trace analysis and implementation.
CMU trace consists of a series of rounds:
–In each round, the client downloads an identical web object from the Mars Rover mirror sites
[Figure: one round's download times across mirrors M1, M2, ..., Mn, e.g. 3 sec, 4 sec, 5 sec]
Slide 17: Experimental Methodology (cont.)
For each round, compare (via ratio) the minimum download time with the download time for:
–What LookingGlass would pick
–The hop-count "closest"-to-client mirror
–Two hosts in the same general geographic area as the client
–A randomly selected mirror
–The primary mirror site ("select first in list")
[Figure: per-round download times for M1, M2, ..., Mn (3 sec, 4 sec, 5 sec, ...) with the "Best Possible" and "LookingGlass" choices marked]
Slide 18: Results
[Results chart not captured in the transcription]
Slide 19: Conclusions
–SPAND: an architecture that facilitates adaptive networked applications
–Advantages/challenges of the design choices in SPAND
–LookingGlass: a web mirror selection tool that uses SPAND
–SPAND source code available: http://www.cs.berkeley.edu/~stemm/spand/
Slide 20: Outline of Talk
Design choices:
–Advantages/challenges of shared, passive, application-specific performance measurements
SPAND architecture:
–Components of SPAND
–Component interactions
LookingGlass: web mirror selection
–Best case: 40x improvement in download time
–Average case: 4x improvement
Slide 21: Key Design Choices in SPAND
Measurements are shared:
–Hosts share performance information via a per-domain repository
Measurements are passive:
–Application-to-application traffic is used to measure network performance
Measurements are application-specific:
–When possible, measure application response time, not bandwidth, latency, hop count, etc. (see the sketch below)
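A minimal sketch of an application-specific measurement for a web workload: time the full object download the application actually experiences, rather than probing bandwidth or latency directly. The URL and function name are placeholders, not part of SPAND.

```python
# Application-level measurement: end-to-end download time for one web object.
import time
import urllib.request

def measure_download(url: str, timeout: float = 10.0) -> float:
    """Return the end-to-end download time in seconds for one web object."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()                     # pull the whole object, as a browser would
    return time.monotonic() - start

# Example (requires network access):
#   elapsed = measure_download("http://mirror.example.org/object.html")
# This per-object time would be the metric carried in a Performance Report,
# rather than a raw bandwidth, latency, or hop-count estimate.
```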
Slide 22: Challenges of SPAND
Three components of noise:
–Net noise: inherent variability in network state
–Sharing noise: inappropriate sharing of measurements between hosts (e.g., modem and LAN users)
–Temporal noise: using old, out-of-date measurements to indicate current performance
Shared measurements add sharing noise; passive measurements add temporal noise.
Noise decreases confidence in the results.
[Figure: a 200 kbits/sec estimate plus noise yields a 100-400 kbits/sec range]
Slide 23: Challenges of SPAND (cont.)
How much net/sharing/temporal noise?
–Net noise: order-of-magnitude (2-4x) differences
–Sharing noise: very small additional differences
–Temporal noise: small additional differences for short time scales (i.e., < 1 hour)
Experiments in the paper quantify this in detail.
[Figure: a 200 kbits/sec estimate plus network, temporal, and sharing noise yields a 100-400 kbits/sec range]