1 Deriving Traffic Demands for Operational IP Networks: Methodology and Experience
Anja Feldmann*, Albert Greenberg, Carsten Lund, Nick Reingold, Jennifer Rexford, and Fred True
Internet and Networking Systems Research Lab, AT&T Labs-Research, Florham Park, NJ
*University of Saarbruecken
2 Traffic Engineering for Operational IP Networks
Improve user performance and network efficiency by tuning router configuration to the prevailing traffic demands.
–Why?
–Time scale?
[Figure: the AT&T backbone (AS 7018) with access links to customers and peers; loads shown are synthetic.]
3 Traffic Engineering Stack
Topology of the ISP backbone
–Connectivity and capacity of routers and links
Traffic demands
–Expected/offered load between points in the network
Routing configuration
–Tunable rules for selecting a path for each flow
Performance objective
–Balanced load, low latency, service level agreements, …
Optimization procedure
–Given the topology and the traffic demands in an IP network, tune routes to optimize a particular performance objective
4 Traffic Demands
How to model the traffic demands?
–Know where the traffic is coming from and going to
–Support what-if questions about topology and routing changes
–Handle the large fraction of traffic crossing multiple domains
How to populate the demand model?
–Typical measurements show only the impact of traffic demands
»Active probing of delay, loss, and throughput between hosts
»Passive monitoring of link utilization and packet loss
–Need network-wide direct measurements of traffic demands
How to characterize the traffic dynamics?
–User behavior, time-of-day effects, and new applications
–Topology and routing changes within or outside your network
5 Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
Conclusions
6 Outline
Sound traffic model for traffic engineering of operational IP networks
–Point-to-multipoint model
Methodology for populating the model
Results
Conclusions
7 Traffic Demands
[Figure: a web site and a user site connected across the big Internet.]
8 Traffic Demands: Interdomain Traffic
What path will be taken between ASes to get to the user site?
Next: what path will be taken within an AS to get to the user site?
[Figure: a web site reaching a user U at a user site via AS 1 through AS 4, with candidate BGP AS paths such as "AS 3, U" and "AS 4, AS 3, U".]
9 Traffic Demands: Zoom in on One AS
A change in the internal routing configuration changes the flow exit point!
[Figure: traffic entering at one ingress link (IN) toward a user site reachable via three egress links (OUT 1, OUT 2, OUT 3), with per-link loads annotated.]
10 Point-to-Multipoint Demand Model
Definition: V(in, {out}, t)
–Entry link (in)
–Set of possible exit links ({out})
–Time period (t)
–Volume of traffic (V(in, {out}, t))
Avoids the "coupling" problem with traditional point-to-point (input-link to output-link) models:
[Diagram: a feedback loop from the point-to-point demand model to traffic engineering to improved routing and back again, since rerouting changes the exit points and thus invalidates the point-to-point demands themselves.]
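The demand model above is essentially a map keyed by (ingress link, egress-link *set*, time bin). A minimal sketch in Python (names and structure are illustrative, not from the paper's tooling); the key point is that the set of exit links, frozen so it can be hashed, is part of the key, so the demand survives routing changes that shift traffic among those exits:

```python
from collections import defaultdict
from typing import FrozenSet, Iterable


class DemandModel:
    """Point-to-multipoint demands: V(in, {out}, t) in bytes."""

    def __init__(self):
        # (ingress_link, frozenset_of_egress_links, time_bin) -> byte count
        self.volume = defaultdict(int)

    def add(self, ingress: str, egress_set: Iterable[str],
            time_bin: int, nbytes: int) -> None:
        # Freeze the egress set so it can serve as part of the dict key.
        self.volume[(ingress, frozenset(egress_set), time_bin)] += nbytes

    def get(self, ingress: str, egress_set: Iterable[str],
            time_bin: int) -> int:
        return self.volume.get((ingress, frozenset(egress_set), time_bin), 0)


# Hypothetical link names for illustration.
demands = DemandModel()
demands.add("nyc-in1", {"sf-out1", "la-out2"}, 0, 4000)
demands.add("nyc-in1", {"sf-out1", "la-out2"}, 0, 1000)
print(demands.get("nyc-in1", {"sf-out1", "la-out2"}, 0))  # 5000
```

Because the key carries the whole candidate egress set rather than one chosen exit, a what-if routing experiment can re-pick the exit link without changing the demand entries.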
11 Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
–Ideal
–Adapted to focus on interdomain traffic and to meet practical constraints in an operational, commercial IP network
Results
Conclusions
12 Ideal Measurement Methodology
Measure traffic where it enters the network
–Input link, destination address, # bytes, and time
–Flow-level measurement (Cisco NetFlow)
Determine where traffic can leave the network
–Set of egress links associated with each network address (forwarding tables)
Compute traffic demands
–Associate each measurement with a set of egress links
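The three steps above compose into a single aggregation pass: join each flow record with the egress set reachable for its destination, then sum bytes per (ingress, egress set, time bin). A sketch under assumed field names (flow records as tuples, reachability as a prefix-to-egress-set dict; real NetFlow records and forwarding tables are of course richer):

```python
from collections import defaultdict


def compute_demands(flows, egress_sets, bin_seconds=3600):
    """flows: iterable of (in_link, dest_prefix, nbytes, timestamp);
    egress_sets: dest_prefix -> frozenset of egress links.
    Returns {(in_link, egress_set, time_bin): total_bytes}."""
    demands = defaultdict(int)
    for in_link, dest_prefix, nbytes, ts in flows:
        outs = egress_sets.get(dest_prefix)
        if outs is None:
            continue  # unmatched traffic (the paper reports ~98% coverage)
        t = int(ts // bin_seconds)
        demands[(in_link, outs, t)] += nbytes
    return demands


# Illustrative records: two flows to the same prefix, one unmatched flow.
flows = [("in1", "12.34.156.0/24", 500, 100),
         ("in1", "12.34.156.0/24", 700, 200),
         ("in2", "99.0.0.0/8", 300, 100)]
egress = {"12.34.156.0/24": frozenset({"out1", "out2"})}
d = compute_demands(flows, egress)
print(d[("in1", frozenset({"out1", "out2"}), 0)])  # 1200
```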
13 Adapted Measurement Methodology: Interdomain Focus
A large fraction of the traffic is interdomain
Interdomain traffic is the easiest to capture
–Large number of diverse access links to customers
–Small number of high-speed links to peers
Practical solution
–Flow-level measurements at peering links (in both directions!)
–Reachability information from all routers
14 Inbound and Outbound Flows on Peering Links
Note: the ideal methodology applies as-is to inbound flows.
[Figure: peering links carrying inbound flows from peers toward customers and outbound flows from customers toward peers.]
15 Most Challenging Part: Inferring Ingress Links for Outbound Flows
Outbound traffic flows are measured only at the peering (egress) link; use routing simulation to trace back to the possible ingress links!
[Figure: an example outbound flow measured at its output peering link, with the customer ingress link unknown (?).]
16 Computing the Demands
Data
–Large, diverse, lossy
–Collected at slightly different, overlapping time intervals across the network
–Subject to network and operational dynamics
Anomalies explained and fixed via an understanding of these dynamics
Algorithms, details, and anecdotes in the paper!
[Diagram: forwarding tables, configuration files, NetFlow, and SNMP data, all collected from the network.]
17 Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
–Effectiveness of measurement methodology
–Traffic characteristics
Conclusions
18 Experience with Populating the Model
Largely successful
–98% of all traffic (bytes) associated with a set of egress links
–95-99% of traffic consistent with an OSPF simulator
Disambiguating outbound traffic
–67% of traffic associated with a single ingress link
–33% of traffic split across multiple ingress links (typically in the same city!)
Inbound and transit traffic (uses input measurement)
–Results are good
Outbound traffic (uses input disambiguation)
–Results may be good enough for traffic engineering, but there are limitations
–To improve results, may want to measure at selected or sampled customer links, e.g., links to e-mail, hosting, or data centers
19 Proportion of Traffic in Top Demands (Log Scale)
Zipf-like distribution: a relatively small number of heavy demands dominates.
[Figure: cumulative proportion of traffic versus demand rank, on a log scale.]
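The Zipf-like concentration can be illustrated with synthetic data (purely illustrative, not the paper's measurements): if the k-th largest demand carries volume proportional to 1/k, the top 1% of demands carries roughly half the traffic.

```python
def top_k_share(volumes, k):
    """Fraction of total volume carried by the k largest demands."""
    ordered = sorted(volumes, reverse=True)
    return sum(ordered[:k]) / sum(ordered)


# Synthetic Zipf-like demand volumes: rank-k demand ~ 1/k, 10,000 demands.
volumes = [1000.0 / rank for rank in range(1, 10001)]
share = top_k_share(volumes, 100)   # top 1% of demands
print(round(share, 2))
```

With a 1/k tail, the top 100 of 10,000 demands carries a bit over half the bytes, which is why the slide argues that optimizing routing for the heavy demands (and sampling the rest) is enough.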
20 Time-of-Day Effects (San Francisco)
Heavy demands at the same site may show different time-of-day behavior.
[Figure: traffic volume over time for heavy San Francisco demands; the time axis is marked at midnight EST.]
21 Discussion
Distribution of traffic volume across demands
–Small number of heavy demands (Zipf's Law!)
–Optimize routing based on the heavy demands
–Measure a small fraction of the traffic (sample)
–Watch out for changes in load and egress links
Time-of-day fluctuations in traffic volumes
–U.S. business, U.S. residential, and international traffic
–Depends on the time of day for the human end-point(s)
–Reoptimize the routes a few times a day (three?)
Stability?
–Yes and no
22 Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
Conclusions
–Related work
–Future work
23 Related Work
Bigger picture
–Topology/configuration (technical report)
»"IP network configuration for traffic engineering"
–Routing model (IEEE Network, March/April 2000)
»"Traffic engineering for IP networks"
–Route optimization (INFOCOM'00)
»"Internet traffic engineering by optimizing OSPF weights"
Populating point-to-point demand models
–Direct observation of MPLS MIBs (GlobalCenter)
–Inference from per-link statistics (Berkeley/Bell Labs)
–Direct observation via trajectory sampling (next talk!)
24 BRAVO (Backbone Routing, Analysis, Visualization, and Optimization)
Data model
–Physical level, IP level, router-complex level
–Traffic demands, router attributes, link attributes
Routing model
–Shortest-path routing, OSPF tie-breaking
–Multi-homed customers, interdomain routing
–Bookkeeping to accumulate the load on each link
Visualization environment
–Coloring/sizing to illustrate link and node statistics
–Querying to subselect links and nodes
–Histograms of statistics
–What-if experiments with new routing configurations
25 Traffic Flow Through the Backbone
[Figure: backbone visualization. Color/size of each node is proportional to the traffic destined to that router (high to low); color/size of each link is proportional to the traffic carried (high to low). Source node: public peering link in New York (Sprint NAP). Destination nodes: WorldNet access routers.]
26 Future Work
Analysis of the stability of the measured demands
Online collection of topology, reachability, and traffic data
Modeling the selection of the ingress link (e.g., use of multi-exit discriminators in BGP)
Tuning BGP policies to the prevailing traffic demands
Interactions of traffic engineering with other resource-allocation schemes (TCP, overlay networks for content delivery)
27 Backup
28 AS 7018
29 Identifying Where the Traffic Can Leave
Traffic flows
–Each flow has a destination IP address (e.g., 12.34.156.5)
–Each address belongs to a prefix (e.g., 12.34.156.0/24)
Forwarding tables
–Each router has a table to forward a packet to a "next hop"
–The forwarding table maps a prefix to a "next-hop" link
Process
–Dump the forwarding table from each edge router
–Identify entries where the "next hop" is an egress link
–Identify the set of all egress links associated with each prefix
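Mapping a destination address to its egress set requires a longest-prefix match against the prefixes collected from the forwarding tables. A small sketch using Python's standard ipaddress module (the table contents are made-up examples; a production implementation would use a trie rather than a linear scan):

```python
import ipaddress


def egress_links_for(dest_addr, prefix_to_egress):
    """Longest-prefix match: return the egress-link set for the most
    specific prefix covering dest_addr, or None if nothing matches.
    prefix_to_egress: {'12.34.156.0/24': {'link-a', ...}, ...}"""
    addr = ipaddress.ip_address(dest_addr)
    best, best_len = None, -1
    for prefix, links in prefix_to_egress.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = links, net.prefixlen
    return best


# Hypothetical table: the /24 is more specific than the covering /16.
table = {"12.34.0.0/16": {"out-chicago"},
         "12.34.156.0/24": {"out-ny1", "out-ny2"}}
print(sorted(egress_links_for("12.34.156.5", table)))  # ['out-ny1', 'out-ny2']
```

Here the slide's example address 12.34.156.5 falls in both prefixes, and the /24 wins, yielding its two possible egress links.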
30 Measuring Only at Peering Links
Why measure only at peering links?
–Measurement support directly in the interface cards
–Small number of routers (lower management overhead)
–Less frequent changes/additions to the network
–Smaller amount of measurement data
Why is this enough?
–The large majority of traffic is interdomain
–Measurement enabled in both directions (in and out)
–Inference of ingress links for traffic from customers
31 Full Classification of Traffic Types at Peering Links
[Figure: peering links and customer links with the four traffic types: inbound, outbound, transit, and internal.]
32 Flows Leaving at Peer Links
Single-hop transit
–Flow enters and leaves the network at the same router
–Keep the single flow record measured at the ingress point
Multi-hop transit
–Flow is measured twice, as it enters and as it leaves the network
–Avoid double counting by omitting the second flow record
–Discard the flow record if the source does not match a customer
Outbound
–Flow is measured only as it leaves the network
–Keep the flow record if the source address matches a customer
–Identify the ingress link(s) that could have sent the traffic
33 Results: Populating the Model
[Table of the measurement data sets used; not transcribed.]