Network Performance Modelling and Simulation, Chapter 2: Network Performance Metrics


CONTENTS
1.1 Typical Performance Measures
1.2 Types of Service Guarantees
1.3 Types of Performance Measurements
1.4 Measures for Computing Systems and Networks
1.5 Why We Need Network Simulation
1.6 Goals of Performance Evaluation & Meta Goals
1.7 Types of Communications Networks, Modeling Constructs
1.8 Performance Targets for Simulation Purposes
1.9 Systems: Performance Evaluation View
1.10 System Classifications
1.11 Case Study

1.1 Typical Performance Measures Ultimately, users are interested in their applications running with "satisfactory" or even "good" performance. "Good performance" is subjective and application-dependent: —Frame rate / level of detail / resolution of an online game, —Sound / speech quality, —Network bandwidth and latency, etc.

1.1 Typical Performance Measures Objective measures (we only deal with these): —Can be measured —Typically expressed as numerical values —Other persons reproducing the experiment would obtain (nearly) the same values Subjective measures: —Are influenced by individual judgment, e.g. speech quality, video quality —Can sometimes be "objectified"; for example, a standardized method for judging the output of audio codecs can aggregate the results of numerous listening tests into a model —Different persons would give different ratings.

1.2 Types of Service Guarantees A service is provided by a service provider upon receiving service requests and under certain conditions. For example, an ISP might grant Internet access when you: —Pay your monthly fee —Behave according to specified policies (do not send spam mails, do not offend other netizens, etc.) —Obey certain technical standards (right modem, dial-in numbers, etc.) —And no serious breakdown of the network infrastructure happens

1.2 Types of Service Guarantees Given that these conditions are fulfilled, the provider might give one of the following promises: —Guaranteed quality of service (QoS): —The service provider claims that a certain service level will be provided under any circumstances —Anything worse is a contract violation —Sometimes additional requirements may be posed, e.g. to guarantee a certain end-to-end delay in a network, you must not exceed some given sending rate; example: –An ISDN B-channel guarantees 64 kbit/s; anything in excess is dropped

1.2 Types of Service Guarantees Expected QoS or statistical QoS: —The service provider promises "more or less" the service. —Problem: how to specify this exactly? Examples: —At most x out of N consecutive data packets will get lost —At most x / N * 100% of data packets will get lost —The probability that a packet is lost is x / N —The first two examples make the difference between short-term and long-term service guarantees: —1st case: "at most 1 packet out of 100". —2nd case: "at most … packets out of …" over a much longer run. Best effort service: —"I will do my very best but I promise nothing"
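The short-term versus long-term distinction can be made concrete in code. The sketch below is illustrative only; the simulated loss probability and the window sizes are made-up values. It checks a windowed guarantee ("at most x losses in any n consecutive packets") against a long-run loss-fraction guarantee:

```python
import random

random.seed(1)

def window_guarantee_holds(losses, x, n):
    """Short-term guarantee: at most x losses in any window of
    n consecutive packets (losses is a 0/1 list)."""
    return all(sum(losses[i:i + n]) <= x
               for i in range(len(losses) - n + 1))

def long_term_guarantee_holds(losses, x, n):
    """Long-term guarantee: the overall loss fraction is at most x/n."""
    return sum(losses) / len(losses) <= x / n

# Simulate 10,000 packets with an (assumed) 0.5% independent loss probability.
losses = [1 if random.random() < 0.005 else 0 for _ in range(10_000)]

print("long-term ok:", long_term_guarantee_holds(losses, 1, 100))
print("windowed ok: ", window_guarantee_holds(losses, 1, 100))
```

Note that the windowed guarantee is strictly stronger: a trace can satisfy the long-run fraction yet still violate it in some 100-packet window.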

1.3 Types of Performance Measurements System-oriented vs. application-oriented measures: —System-oriented measures are independent of specific applications —Application-oriented measures might depend in complex ways on system-oriented measures Example – video conferencing over the Internet: —Application-oriented measures: frame rate, resolution, color depth, SNR, absence of distortions, turnaround times, lip synchronization —System-/network-oriented measures: throughput, delay, jitter, losses, blocking probabilities, etc. Often there is no simple way to predict application measures from system-oriented measures: —A video conferencing tool might be aware of packet losses and provide mechanisms to conceal them, presenting the user with slightly degraded video quality instead of dropouts or black boxes.

1.4 Measures for Different Types of Computing Systems Desktop systems (single user): —Response times, graphics performance (e.g. level of detail) Server systems (multiple users): —Throughput (processor, I/O bandwidth) —Reliability (MTBF, mean time between failures) —Availability (fraction of downtime per year / unit time) —Utilization Embedded systems: —Energy consumption —Memory consumption —Real-time performance: –Number of deadline misses –Jitter for periodic tasks –Interrupt latencies —System utilization

1.4 Measures for Communication Networks Delay: —Processing and queuing delays in network elements —Medium access delay in MAC protocols Jitter: delay variation Throughput: number of requests / packets which go through the network per unit time Goodput: similar to throughput, but without overhead (e.g. control packets, retransmissions) Loss rate: fraction of packets which are lost or erroneous Utilization: fraction of time a communications link / resource is busy Blocking probability: probability of getting no (connection-oriented) service; sometimes you get no line when you pick up the phone Dropping probability: in cellular systems, the probability that an ongoing call gets lost upon handover
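Several of these measures can be computed directly from a packet trace. A minimal sketch, using a hypothetical trace of (send time, receive time, size) tuples with invented numbers:

```python
# Hypothetical packet trace: (send_time_s, recv_time_s or None if lost, size_bytes)
trace = [
    (0.00, 0.020, 1500),
    (0.01, 0.034, 1500),
    (0.02, None,  1500),   # lost packet
    (0.03, 0.052, 1500),
    (0.04, 0.061, 1500),
]

delivered = [(s, r, b) for s, r, b in trace if r is not None]
delays = [r - s for s, r, _ in delivered]

loss_rate  = 1 - len(delivered) / len(trace)
mean_delay = sum(delays) / len(delays)
# Jitter taken here as the mean absolute difference of consecutive delays.
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
duration   = max(r for _, r, _ in delivered) - min(s for s, _, _ in trace)
throughput = sum(b for _, _, b in delivered) * 8 / duration  # bit/s

print(f"loss rate {loss_rate:.0%}, mean delay {mean_delay * 1000:.1f} ms")
```

Goodput would be computed the same way as throughput, but counting only useful payload bytes (no headers, no retransmitted copies).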

1.5 Why we need network Simulation Recently, design and management of network systems have become an increasingly challenging task. Why? Traditional static calculations are no longer a reasonable approach for validating the implementation of a new network, because of the stochastic nature of network traffic and the complexity of the overall system. Poor network performance may have serious impacts on the successful operation of an organization's business. Network designers increasingly rely on methods that help them evaluate several design proposals before the final decision is made and the actual system is built.

1.5 Why we need network Simulation Network designers typically set the following objectives: —Performance modeling: Obtain statistics for various performance parameters of links, routers, switches, buffers, response time, etc. —Failure analysis: Analyze the impacts of network element failures. —Network design: Compare statistics about alternative network designs to evaluate the requirements of alternative design proposals. —Network resource planning: Measure the impact of changes on the network's performance, such as addition of new users, new applications, or new network elements.

1.6 Goals of Performance Evaluation General Goals: —Determine certain performance measures for existing systems or for models of (existing or future) systems. —Develop new analytical and methodological foundations, e.g. in queuing theory, simulation etc. —Find ways to apply theoretical approaches in creating and evaluating performance models. Typical specific goals —Bottleneck analysis and optimization —Comparison of alternative systems / protocols / algorithms —Capacity planning —Contract validation —Performance analysis is often a tool in investment decisions or mandated by other economic reasons

1.6 Meta Goals The methods, workloads, performance measures etc. should be relevant and objective; examples: —To determine the maximum network throughput it is appropriate to use a high load instead of a (typical) low load. —To test your system under errors you have to force these errors. Important notes: —Results should be communicated in a clear, concise and understandable manner to the persons setting the goals (often suits :o) —The performance assessment should be fair and careful, e.g. when comparing OUR system to THEIRS —The results should be complete, e.g. you should not restrict yourself to a single workload favoring your system —The results should be reproducible: often not easy, e.g. for measurements in wireless systems

1.7 Types of communications networks, modeling constructs A communications network consists of network elements: nodes (senders and receivers) and connecting communications media. Among several criteria for classifying networks we use two: transmission technology and scale. Transmission technology classifies networks as broadcast or point-to-point networks. The scale or distance also determines the technique used in a network: wireline or wireless, and the physical area coverage (LAN, MAN, WAN, etc.)

1.8 Performance targets for simulation purposes A list of network attributes that have a profound effect on network performance and are the usual targets of network modeling. These attributes are the goals of the statistical analysis, design, and optimization of computer networks. —Link capacity, or channel capacity: the number of messages per unit time handled by a link, usually measured in bits per second. —Bandwidth: the difference between the highest and lowest frequencies available for network signals. —Response time: the time it takes a network system to react to a certain source's input. —Latency: the amount of time it takes for a unit of data to be transmitted across a network link.
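One-hop latency is commonly decomposed into transmission, propagation, processing, and queuing components. A small illustration; the link parameters below (100 Mbit/s, 2000 km of fibre) are arbitrary example values:

```python
def one_hop_latency(size_bits, bandwidth_bps, distance_m,
                    prop_speed_mps=2e8, proc_s=0.0, queue_s=0.0):
    """One-link latency = transmission + propagation + processing + queuing.
    2e8 m/s is a typical signal propagation speed in fibre/copper."""
    transmission = size_bits / bandwidth_bps   # time to push the bits out
    propagation  = distance_m / prop_speed_mps # time for a bit to travel
    return transmission + propagation + proc_s + queue_s

# A 1500-byte frame over a 100 Mbit/s link spanning 2000 km.
t = one_hop_latency(1500 * 8, 100e6, 2_000_000)
print(f"{t * 1000:.2f} ms")  # propagation (10 ms) dominates transmission (0.12 ms)
```

The example also shows why "bandwidth" alone does not determine latency: on long links the propagation term dominates regardless of link speed.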

1.8 Performance targets for simulation purposes —Routing protocols: A route may traverse multiple links with different capacities, latencies, and reliabilities. The objective of a routing protocol is to find an optimal or near-optimal route between source and destination while avoiding congestion. —Traffic engineering: Implies the use of mechanisms to avoid congestion by allocating network resources optimally, rather than continually increasing network capacities. —Protocol overhead: Protocol messages and application data are embedded inside protocol data units, such as frames, packets, and cells. —Burstiness: The most dangerous cause of network congestion is the burstiness of the network traffic. —Frame size: Large frames can fill up router buffers much faster than smaller frames, resulting in lost frames and retransmissions. —Dropped packet rate: The rate of dropping packets at the lower layers determines the rate of retransmitting packets at the transport layer.

1.9 Systems: Performance Evaluation View Features of a system for performance evaluation purposes

1.9 Elements of a System: Input Workload: specifies the arrival of requests which the system is supposed to serve; examples: —Arrival of packets to a communication network —Arrival of programs to a computer system —Arrival of instructions to a processor —Arrival of read/write requests to a database Workload characteristics: —Request type (e.g. TCP packet vs. UDP packet vs.... ) —Request size / service time / resource consumption (e.g. packet lengths) —Inter-arrival times of requests —(statistical) dependence between requests
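Inter-arrival times are commonly modeled as exponentially distributed, which yields a Poisson arrival process. A minimal workload generator along these lines; the rate and horizon are arbitrary illustrative values:

```python
import random

random.seed(42)

def poisson_arrivals(rate_per_s, horizon_s):
    """Generate arrival times with exponential inter-arrival times
    (a Poisson process), a common workload model for packet arrivals."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_s)  # next inter-arrival gap
        if t > horizon_s:
            return times
        times.append(t)

arrivals = poisson_arrivals(rate_per_s=100.0, horizon_s=10.0)
print(len(arrivals))  # on average about rate * horizon = 1000 arrivals
```

Real traffic is often burstier than Poisson (statistically dependent requests), so this is a baseline model rather than a universally valid one.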

1.9 Elements of a System: Input Configuration or parameters: in general, all inputs influencing the system's operation; examples: —Maximum # of retransmissions in ARQ schemes —Time slice length in a multitasking operating system Not all of these can be (easily) controlled. —Factors: a subset of the parameters, which are purposely varied during a performance study to assess their influence Error model: specifies the types and frequencies of failures of system components or communication channels: —Persistent vs. transient errors: —Component failures are often persistent —Channel errors are often transient —System malfunctioning vs. malicious behavior caused by an adversary —Occurrence of "chain reactions" or "alarm storms"
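Transient, bursty channel errors are often modeled with a two-state Gilbert-Elliott-style chain: a "good" state with a low error probability and a "bad" state with a high one. A sketch with made-up transition and error probabilities:

```python
import random

random.seed(7)

def gilbert_elliott(n, p_gb=0.01, p_bg=0.2, e_good=0.001, e_bad=0.3):
    """Two-state channel error model: the channel alternates between a
    'good' and a 'bad' state, so errors come in bursts. Returns a 0/1
    error indicator per transmitted unit. All probabilities are
    illustrative assumptions, not measured values."""
    state_bad, errors = False, []
    for _ in range(n):
        # State transition first ...
        if state_bad:
            if random.random() < p_bg:
                state_bad = False
        else:
            if random.random() < p_gb:
                state_bad = True
        # ... then draw an error with the state-dependent probability.
        err_prob = e_bad if state_bad else e_good
        errors.append(1 if random.random() < err_prob else 0)
    return errors

errs = gilbert_elliott(100_000)
print(f"overall error rate: {sum(errs) / len(errs):.3%}")
```

Unlike an independent-loss model with the same average rate, this one produces the error clustering that ARQ and FEC mechanisms actually have to cope with.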

1.9 Elements of a System: Others The system generates an output, some parts of which are presented to the user —It can also have an internal state, which determines its operations together with the input —There could be feedback: some parts of the output serve as input —To obtain desired performance measures, the output or the observable system state may have to be processed further, by some “function” f

1.10 Classifications of Systems There are a number of classifications of systems [1] Static vs. dynamic systems: —In a static system the output depends only on the current input, but not on past inputs or the current time —A dynamic system might depend on older inputs (“memory”) or on the current time (a system needs internal state to have memory). Time-varying vs. time-invariant systems: —Time-invariant: the output might depend on the current and past inputs, but not on the current time —In a time-varying system this restriction is removed Open systems vs. closed systems: —Open systems have an “outside world” which is not controllable and which might generate workloads, failures, or changes in configuration —In a closed system everything is under control

1.10 Classifications of Systems Stochastic systems vs. deterministic systems: —In a stochastic system at least one part of the input or internal state is a random variable / random process, so the outputs are also random. —Almost all "real" systems are stochastic systems because of –"true" randomness, or –because the system is so complex / so susceptible to small parameter variations that predictions are hardly possible Continuous time systems (CTS) vs. discrete time systems (DTS): —In CTS state changes might happen at any time, even uncountably often within any finite time interval —In DTS there is at most a countable number of state changes within any finite time interval, occurring —at arbitrary times, or —only at certain prescribed instants (e.g. equidistant) —We also refer to discrete time systems as discrete-event systems (DES)
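A stochastic discrete-event system can be illustrated with a tiny simulation: an M/M/1 queue, whose simulated mean time in system can be checked against the analytic result E[T] = 1/(mu - lambda). All parameter values below are illustrative:

```python
import random

random.seed(0)

def mm1_mean_delay(arrival_rate, service_rate, n_customers=50_000):
    """Event-driven simulation of an M/M/1 queue: exponential inter-arrival
    and service times, one FIFO server. Returns the mean time in system."""
    t_arrival, server_free_at, total_delay = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        t_arrival += random.expovariate(arrival_rate)   # next arrival
        start = max(t_arrival, server_free_at)          # wait if server busy
        server_free_at = start + random.expovariate(service_rate)
        total_delay += server_free_at - t_arrival       # time in system
    return total_delay / n_customers

sim = mm1_mean_delay(arrival_rate=0.8, service_rate=1.0)
analytic = 1 / (1.0 - 0.8)   # M/M/1: E[T] = 1/(mu - lambda) = 5
print(f"simulated {sim:.2f}, analytic {analytic:.2f}")
```

Agreement with the closed-form result is exactly the kind of validation step that makes simulation trustworthy before it is applied to systems with no analytic solution.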

1.11 Case Study Consider remote pipes (rpipe) versus remote procedure calls (rpc): —rpc is like a procedure call, but the procedure is handled on a remote server –The client caller blocks until return —rpipe is like a pipe, but the server gets the output on a remote machine –The client process can continue, non-blocking Goal: compare the performance of applications using rpipes with similar applications using rpcs

System Definition Client, server, and network (diagram: client, network, server). The key component is the "channel", either an rpipe or an rpc —Only the subset of the client and server that handles the channel is part of the system —Try to minimize the effect of components outside the system

Services There are a variety of services that can happen over a rpipe or rpc Choose data transfer as a common one, with data being a typical result of most client-server interactions Classify amount of data as either large or small Thus, two services: —Small data transfer —Large data transfer

Metrics Limit metrics to correct operation only (no failures or errors) Study service rate and resources consumed: A) Elapsed time per call B) Maximum call rate per unit time C) Local CPU time per call D) Remote CPU time per call E) Number of bytes sent per call
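Given a per-call measurement log, these metrics reduce to simple aggregations. A sketch over a hypothetical log; all numbers are invented for illustration:

```python
# Hypothetical per-call log: (elapsed_s, local_cpu_s, remote_cpu_s, bytes_sent)
calls = [
    (0.012, 0.002, 0.004, 512),
    (0.015, 0.003, 0.005, 512),
    (0.011, 0.002, 0.004, 512),
]

n = len(calls)
mean_elapsed    = sum(c[0] for c in calls) / n          # metric A
max_call_rate   = 1 / min(c[0] for c in calls)          # metric B (back-to-back)
mean_local_cpu  = sum(c[1] for c in calls) / n          # metric C
mean_remote_cpu = sum(c[2] for c in calls) / n          # metric D
mean_bytes      = sum(c[3] for c in calls) / n          # metric E

print(f"{mean_elapsed * 1000:.1f} ms/call, up to {max_call_rate:.0f} calls/s")
```

Taking the fastest observed call as the bound for metric B is one simple convention; a measured saturation test would give a more defensible maximum rate.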

Parameters System parameters: —Speed of CPUs (local, remote) —Network speed and reliability (retransmissions) —Operating system overhead for interfacing with channels and with the network Workload parameters: —Time between calls —Number and sizes of parameters and of results —Type of channel (rpc or rpipe) —Other loads on CPUs and on the network

Key Factors Type of channel —rpipe or rpc Speed of network —Choose short-haul (LAN) and across country (WAN) Size of parameters —Small or large Number of calls —11 values: 8, 16, 32, …, 1024 All other parameters are fixed (Note: try to run during "light" network load)

Evaluation Technique Since prototypes exist, use measurement Use analytic modeling based on measured data for values outside the scope of the experiments conducted

Workload A synthetic program generates the specified channel requests It will also monitor resources consumed and log results Use "null" channel requests to get the baseline resources consumed by logging —(Remember the Heisenberg principle!)

Experimental Design Full factorial (all possible combinations of factors): 2 channels, 2 network speeds, 2 sizes, 11 numbers of calls —2 x 2 x 2 x 11 = 88 experiments
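The full-factorial enumeration is easy to sketch. The 11 "number of calls" levels are represented here by placeholders, since the full list is not spelled out above; only the counts matter for the experiment total:

```python
from itertools import product

channels  = ["rpipe", "rpc"]
networks  = ["LAN", "WAN"]        # short-haul vs. across country
sizes     = ["small", "large"]
num_calls = list(range(11))       # placeholders for the 11 levels

# Full factorial design: every combination of every factor level.
experiments = list(product(channels, networks, sizes, num_calls))
print(len(experiments))  # 2 x 2 x 2 x 11 = 88
```

Enumerating the design programmatically also makes it trivial to drive the synthetic workload generator once per combination and to verify that no run was skipped.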

Data Analysis Analysis of variance will be used to quantify the effects of the first three factors —Are they significantly different? Regression will be used to quantify the effect of the number of consecutive calls —Is performance linear? Exponential?