
Anue Systems, Inc. v1.0 Telecommunications Industry Association TR-30.3/08-12-023, Lake Buena Vista, FL, December 8 - 9, 2008

Load, Delay and Packet Loss
TIA TR 30.3 meeting, December 8-9, 2008, Orlando, FL

TR 30.3 Meeting 12/8/2008
Goals
- Improve TIA-921-A
- Packet-based model
  - Correctly emulates impairment with different bit rates, packet sizes, and packet intervals. Can handle mixed traffic.
  - Contrast with the time-based model, where a fixed packet size and packet interval must be specified. It cannot handle mixed traffic.
- Variable bit rate user data (e.g. MPEG-4 video)
  - Results depend on user packet sizes and when they arrive
- Bidirectional (asymmetric) network model
- Enforce bandwidth limits
- Better correspondence between high latency and lost packets
- Fewer "magic" numbers in the model
- Fewer test cases

TR 30.3 Meeting 12/8/2008
Goal: Understand this diagram.
[Diagram: input user packets merge with packets from a disturbance load generator, pass through a link-latency element, and emerge as output user packets]

TR 30.3 Meeting 12/8/2008
Review of our previous work
- Excellent proposal by Alan Clark to create a new model
  - The model takes traffic characteristics into account
  - Models typical TCP flow characteristics
- We discussed the models used for G.8261
  - Many similarities between TIA-921 and G.8261
  - But enough differences to warrant a new effort
- We discussed characteristics of disturbance loads
  - Burstiness definitions
  - Disturbance load probability density functions

TR 30.3 Meeting 12/8/2008
Modeling strategies
- Top down or bottom up? Many models are hybrids.
- Top-down models are constructed empirically
  - Observe the behavior of the system under various conditions
  - Select parameter values that fit the observations
  - The parameters may or may not be physical properties of the system
- Bottom-up models are constructed analytically
  - Analyze how an idealized network component should behave
  - Parameters for these models are usually physical properties of the system, for example the buffer size in a router or switch
  - Test model components individually and compare the idealized results with actual measurements to verify
  - If there's a discrepancy, the idealized model must be revised

TR 30.3 Meeting 12/8/2008
Modeling Strategies (cont.)
- TIA-921 and TIA-921-A take a hybrid approach
  - These models are mainly top-down models today
  - But they have some bottom-up characteristics
  - Some effects (such as serialization delay) derive from actual physical characteristics of the links
  - Other effects (such as core network behavior and link impulse probability) are empirically determined
- The goal for TIA-921-B is to build on previous work
  - The improved model should have more parts built bottom up

TR 30.3 Meeting 12/8/2008
The ultimate bottom-up model
- We could build a discrete event simulation of every packet
  - But it is too computationally intensive
  - It requires that we simulate both the packets of interest (user packets) and the disturbance load packets
- A 24-hour simulation of a 10-hop network built from GE switches, operating at 50% load with 1400-byte (avg) packets, represents about 40 billion packets
  - That's half a petabit. If you watched HDTV for two years straight, without sleeping, it would use about that many bits.
- Therefore it is not practical to simulate everything. We must make some approximations. (A back-of-the-envelope check follows this slide.)
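A quick check of the slide's packet-count arithmetic. The assumptions are mine: GE means 1 Gbit/s, and each packet is counted once per hop.

```python
# Rough check of the "40 billion packets / half a petabit" figures.
link_rate = 1e9            # bits/second (Gigabit Ethernet, assumed)
load = 0.50                # 50% utilization
pkt_bits = 1400 * 8        # average packet size in bits
hops = 10
seconds = 24 * 3600        # one day

pkts = load * link_rate / pkt_bits * seconds * hops
bits = pkts * pkt_bits
print(f"{pkts:.1e} packets, {bits / 1e15:.2f} petabits")   # ~3.9e10, ~0.43 Pbit
```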

TR 30.3 Meeting 12/8/2008
Strategy for simplification
- Create a statistical model of the disturbance load traffic
  - Derive the delay and loss characteristics for different levels of disturbance load
- Fully simulate all of the user packets
- Use a statistical approximation of the disturbance load
  - Alan indicated he had information on characterizing typical TCP flows
  - On the last conference call, we talked about load PDFs: statistical models of the disturbance packets
- The result is a model that only needs to perform calculations when a user packet is received
  - Significant increase in performance and accuracy

TR 30.3 Meeting 12/8/2008
Review: Burstiness
- Defined as an on/off process: the disturbance load generator is either off or on
- Definitions:
  - The nominal generator load is L_nom
  - While the generator is on, it creates a burst load L_burst
  - While the generator is off, it generates a load of 0%
  - The time that the generator is on is T_burst (chosen randomly)
  - The time that the generator is off is T_gap
- Choose a linear mapping for the burst load:
  - L_Bmin: load during a burst when L_nom = 0
  - L_Bmax: load during a burst when L_nom = 100%

TR 30.3 Meeting 12/8/2008
Review: Burstiness Equations
[The equations on this slide were an image and did not survive transcription; a tentative reconstruction follows.]
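A plausible reconstruction from the definitions on the previous slide: the linear mapping between L_nom and L_burst, plus conservation of average load over the on/off cycle. These are inferred, not copied from the slide:

$$L_{\mathrm{burst}} = L_{B\min} + (L_{B\max} - L_{B\min})\,L_{\mathrm{nom}}, \qquad \frac{T_{\mathrm{burst}}}{T_{\mathrm{burst}} + T_{\mathrm{gap}}} = \frac{L_{\mathrm{nom}}}{L_{\mathrm{burst}}}$$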

TR 30.3 Meeting 12/8/2008
Review: Burstiness
[Plot: example burst behavior with L_Bmin = 50% and L_Bmax = 133%; a small code sketch follows.]
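A small sketch of this example, using the reconstructed equations above (so equally tentative):

```python
# Burst mapping with the slide's example parameters: L_Bmin = 50%, L_Bmax = 133%.
L_Bmin, L_Bmax = 0.50, 1.33

def burst_params(L_nom):
    """Return (burst load, on-time fraction) for a nominal load L_nom."""
    L_burst = L_Bmin + (L_Bmax - L_Bmin) * L_nom   # linear mapping (reconstructed)
    duty = L_nom / L_burst                          # T_burst / (T_burst + T_gap)
    return L_burst, duty

for L_nom in (0.1, 0.5, 0.9):
    L_burst, duty = burst_params(L_nom)
    print(f"L_nom={L_nom:.0%}: burst load {L_burst:.0%}, on {duty:.0%} of the time")
```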

TR 30.3 Meeting 12/8/2008
Review: Composite Load PDF
- Two CBR disturbance load generators
- One bursty gamma disturbance load generator
[Plot: the resulting composite load PDF]

TR 30.3 Meeting 12/8/2008
Idealized wire-rate Ethernet switch/router
[Diagram: input user packets merge with packets from a disturbance load generator into a queue of buffer size F (bits) serviced at link bit rate R (bits/sec); labeled delay elements include link latency, store & forward, enqueue delay, dequeue delay, clock crossing, and a fixed latency]

TR 30.3 Meeting 12/8/2008
Idealized wire-rate Ethernet switch/router
- Delay for the idealized switch has several factors:
  - Fixed latency constant (approx. 500 ns)
  - Link latency: time of flight for the signal (photons or electrons)
  - Store-and-forward delay (serialization delay): depends on user packet size (receive packet size / receive link rate)
  - Random: clock-crossing delay uncertainty (small, dozens of ns)
  - Enqueue and dequeue latency (small, hundreds of ns)
  - Queuing delay: load dependent. This is the hard part. (A sketch of the fixed terms follows this slide.)
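A minimal sketch of the load-independent terms, using the rough magnitudes quoted on the slide; the uniform bound on the clock-crossing term and the exact constants are illustrative assumptions, and link propagation (time of flight) is omitted:

```python
import random

def fixed_delay(packet_bits, link_rate_bps, rng=random):
    """Sum the fixed (load-independent) delay terms of the idealized switch."""
    serialization = packet_bits / link_rate_bps   # store-and-forward delay
    fixed = 500e-9                                # fixed latency, ~500 ns
    clock = rng.uniform(0.0, 40e-9)               # clock crossing, dozens of ns
    overhead = 300e-9                             # enqueue + dequeue, ~hundreds of ns
    return serialization + fixed + clock + overhead

# A 1518-byte packet on GE: ~12.1 us of serialization dominates the fixed terms.
print(f"{fixed_delay(1518 * 8, 1e9) * 1e6:.2f} us")
```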

TR 30.3 Meeting 12/8/2008
Our example load PDF
[Plot: the example disturbance load PDF used in the following slides]

TR 30.3 Meeting 12/8/2008
Delay and loss from the disturbance load PDF
- Assume that the queue starts out empty (we'll revisit this assumption later, don't worry)
- A user packet of size S_i arrives at time t_i
- Define Δ_i = t_i − t_{i−1}, the interval since the previous user packet
- Step 1: How much disturbance load arrived between t_{i−1} and t_i?
[Timeline: previous arrival at t_{i−1}, packet S_i arriving at t_i]

TR 30.3 Meeting 12/8/2008
Disturbance Load CDF
- The CDF is the cumulative distribution function of the PDF
- It is piecewise continuous, monotonically increasing, and invertible
[Plot: CDF of the example disturbance load PDF]

TR 30.3 Meeting 12/8/2008
Inverse CDF function (CDF⁻¹)
- The inverse CDF function can help
- It is a mapping from uniform random numbers (easy to make) to random numbers of any distribution (hard to make)
- This mapping can be pre-computed and saved in memory (very fast!) (a lookup-table sketch follows this slide)
[Plot: CDF⁻¹ mapping a uniform random number to a load level]
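A minimal sketch of the precomputed inverse-CDF lookup the slide describes. The PDF shape and bin grid here are placeholders, not the example PDF from the deck:

```python
import numpy as np

loads = np.linspace(0.0, 2.5, 251)      # load axis, 0%..250% (illustrative)
pdf = np.exp(-loads / 0.5)              # assumed PDF shape, not from the slides
pdf /= pdf.sum()

cdf = np.cumsum(pdf)                    # monotonically increasing, ends at 1.0

# Precompute CDF^-1 on a fine uniform grid so run-time sampling is a table lookup.
u_grid = np.linspace(0.0, 1.0, 4096)
inv_cdf = np.interp(u_grid, cdf, loads)  # maps uniform u -> load level

def sample_load(rng):
    """Draw a disturbance load: one uniform draw, one table index."""
    u = rng.random()
    return inv_cdf[int(u * (len(inv_cdf) - 1))]

rng = np.random.default_rng(0)
print([f"{sample_load(rng):.0%}" for _ in range(5)])
```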

TR 30.3 Meeting 12/8/2008
Delay and loss from the disturbance load PDF
- A user packet of size S_i arrives at time t_i
- Step 1: How much disturbance load arrived between t_{i−1} and t_i?
  - Generate it by mapping a uniform random number through CDF⁻¹
  - This gives a load percentage for the interval ending at t_i: L_i

TR 30.3 Meeting 12/8/2008
Divide into three simpler sub-problems
- Non-congested: load is between 0 and L_CONGEST (100%)
- Congested: load is between L_CONGEST (100%) and L_DROP
- Overloaded: load is more than L_DROP
- We'll define L_DROP in a moment (a classifier sketch follows this slide)
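A trivial sketch of the three-way split; the L_DROP value is the example computed on the Overload slide below, not a normative constant:

```python
L_CONGEST = 1.00     # 100% of link rate
L_DROP = 2.31        # example value from the Overload slide below

def classify(load):
    """Map a sampled interval load L_i to its sub-problem."""
    if load < L_CONGEST:
        return "non-congested"
    if load < L_DROP:
        return "congested"
    return "overloaded"
```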

TR 30.3 Meeting 12/8/2008
Three simpler sub-problems
[Plot: the example load PDF divided into the non-congested, congested, and overloaded regions]

TR 30.3 Meeting 12/8/2008
Simplest sub-problem: Overload
- Easiest sub-problem: the user packet gets dropped
- This happens if the disturbance load percentage is so high that the queue is completely full when the user packet arrives
- Define L_DROP as the load threshold above which a drop must occur
- If L_i ≥ L_DROP then the user packet is dropped
- If F = 16 kbytes, R = 100 Mbit/second, and Δ = 1 ms, then L_DROP = 231% (a numeric check follows this slide)
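The threshold formula itself was an image; the example numbers are consistent with L_DROP = 1 + F/(R·Δ), i.e. the buffer can absorb F bits of excess before overflowing. Treat the formula as a reconstruction:

```python
# Check of the overload threshold, assuming L_DROP = 1 + F / (R * delta).
F = 16 * 1024 * 8      # buffer size, bits (16 kbytes)
R = 100e6              # link rate, bits/second
delta = 1e-3           # interval, seconds (1 ms)

L_drop = 1 + F / (R * delta)
print(f"L_DROP = {L_drop:.0%}")    # -> L_DROP = 231%, matching the slide
```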

TR 30.3 Meeting 12/8/2008
Next simplest sub-problem: non-congested
- If 0 < L_i < 100%, the queue will contain at most one disturbance load packet when the user packet arrives
- The user packet is not dropped in this case
- Therefore it will be serviced immediately after any in-progress packet
- Delay depends on the size of the disturbance packet (known) and on when the user packet arrives relative to the disturbance packet (random)
  - Assume arrival times are uncorrelated, so the wait is a uniform random variable
- Assume the disturbance packets are E bits, which means the maximum amount of time the user packet has to wait is δ = E / R
- So for this case, the delay is a uniform random value on [0, δ] (a sketch follows this slide)
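A minimal sketch of the non-congested delay draw, using the δ = E/R relation reconstructed above; the packet size and link rate are assumptions for illustration:

```python
import random

E = 1518 * 8        # assumed disturbance packet size, bits
R = 100e6           # link rate, bits/second
delta = E / R       # max wait behind one in-progress packet (~121 us here)

def noncongested_delay(rng=random):
    """Uniform delay in [0, delta]: arrival is uncorrelated with the packet."""
    return rng.uniform(0.0, delta)
```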

TR 30.3 Meeting 12/8/2008
Non-congested case: Example
[Plot: PDV histogram for δ = 110.4 µs]

TR 30.3 Meeting 12/8/2008
Non-congested case: 50% TM2
- TM2 is 30% 64-byte, 10% 576-byte, and 60% 1518-byte traffic
- So a 50% TM2 load is 15%, 5%, and 30% respectively (a sketch follows this slide)
- This PDV histogram is often said to resemble a church or cathedral
[Plot: PDV histogram for a 50% TM2 load]
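A small sketch of the decomposition arithmetic; it reads the TM2 percentages as shares of load (which is how the slide scales them), an interpretation rather than a quote:

```python
# Decompose a TM2 disturbance load into its per-packet-size components.
tm2_shares = {64: 0.30, 576: 0.10, 1518: 0.60}   # bytes -> share of load

def tm2_components(total_load):
    """e.g. tm2_components(0.5) -> {64: 0.15, 576: 0.05, 1518: 0.30}"""
    return {size: share * total_load for size, share in tm2_shares.items()}

print(tm2_components(0.50))
```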

TR 30.3 Meeting 12/8/2008
Non-congested case: Lab measurements
[Plot: typical measured PDV, showing the characteristic "cathedral" shape]

TR 30.3 Meeting 12/8/2008
Congested case
- Disturbance load 100% < L_i < L_DROP
- Seems hard (but actually turns out to be simple)
- We assumed that the buffer started out empty (remember, we'll fix this in a few more slides)
- Conservation:
  - Bits added to the queue during Δ_i: L_i · Δ_i · R
  - Bits removed from the queue during Δ_i: Δ_i · R
  - The difference is the buffer fullness: G_i = (L_i − 1) · Δ_i · R
  - And the delay is G_i / R

TR 30.3 Meeting 12/8/2008
Adding memory
- We assumed that the queue always started out empty
- This essentially means that the time intervals (Δ_i) are independent
- It is a reasonable approximation when the buffer size is much smaller than the number of bits serviced by the queue during Δ_i
  - It's usually a good approximation for high-speed links
  - But not for low-speed links like TIA-921 access links
- So, to fix that, save the queue state as a variable G_i
  - We need an equation to calculate G_{i+1}
  - And we must change the threshold levels for L_CONGEST and L_DROP

TR 30.3 Meeting 12/8/2008
Add memory
- Adjust the thresholds L_CONGEST and L_DROP
- Update equation for G_i (after calculation, G_{i+1} is limited to be less than or equal to F)
[The update equation on this slide was an image; a reconstructed sketch follows.]
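Putting the preceding slides together: a minimal per-packet sketch. The update rule G_{i+1} = clamp(G_i + (L_i − 1)·Δ_i·R, 0, F), the G-dependent branching, and the disturbance packet size E are reconstructions from the slide text, so treat this as an illustration rather than the TIA-921-B algorithm:

```python
import random

R = 100e6                  # link rate, bits/second
F = 16 * 1024 * 8          # buffer size, bits (16 kbytes)
E = 1518 * 8               # assumed disturbance packet size, bits

def process_user_packet(G, dt, L, rng=random):
    """Return (new buffer fullness in bits, delay in seconds or None if dropped).

    G  -- queue fullness carried over from the previous interval (bits)
    dt -- time since the previous user packet, i.e. Delta_i (seconds)
    L  -- disturbance load sampled for this interval (1.0 == 100%)
    """
    G = G + (L - 1.0) * dt * R       # conservation: arrivals minus service
    if G >= F:                       # overloaded: queue is full, packet dropped
        return F, None
    if G <= 0.0:                     # non-congested: queue drained during dt,
        return 0.0, rng.uniform(0.0, E / R)  # wait behind at most one packet
    return G, G / R                  # congested: wait for G bits to drain

# Usage sketch with made-up (dt, load) samples; the loads would normally come
# from the inverse-CDF lookup shown earlier.
G = 0.0
for dt, L in [(1e-3, 0.4), (1e-3, 1.8), (1e-3, 2.5), (1e-3, 0.2)]:
    G, delay = process_user_packet(G, dt, L)
    print("dropped" if delay is None else f"delay {delay * 1e6:.1f} us")
```

With G carried across intervals, the non-congested/congested/overloaded split falls out of the sign and magnitude of the updated G rather than fixed load thresholds, which is one way to read the slide's "adjust the thresholds" bullet.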

TR 30.3 Meeting 12/8/2008
Conclusion
- Statistically modeling the disturbance load allows a more accurate implementation of TIA-921-B
- Supports:
  - Variable bit rate user data (e.g. MPEG-4 video); results depend on user packet sizes and when they arrive
  - A bidirectional (asymmetric) network model
  - Enforced bandwidth limits
  - Better correspondence between high latency and lost packets
  - Fewer "magic" numbers in the model

TR 30.3 Meeting 12/8/2008
Next Steps
- TBD