Network Performance Modelling and Simulation
Chapter 2: Network Performance Metrics
CONTENTS
1.1 Typical Performance Measures
1.2 Types of Service Guarantees
1.3 Types of Performance Measurements
1.4 Measures for Computing Systems and Networks
1.5 Why We Need Network Simulation
1.6 Goals of Performance Evaluation & Meta Goals
1.7 Types of Communications Networks, Modeling Constructs
1.8 Performance Targets for Simulation Purposes
1.9 Systems: A Performance Evaluation View
1.10 System Classifications
1.11 Case Study
1.1 Typical Performance Measures
Ultimately, users want their applications to run with "satisfactory" or even "good" performance.
"Good performance" is subjective and application-dependent:
—Frame rate, level of detail, or resolution of an online game
—Sound or speech quality
—Network bandwidth and latency, etc.
1.1 Typical Performance Measures
Objective measures (we deal only with these):
—Can be measured
—Are typically expressed as numerical values
—Other persons reproducing the experiment would obtain (nearly) the same values
Subjective measures:
—Are influenced by individual judgment, e.g. speech quality, video quality
—Can sometimes be "objectified"; example: a standardized method for judging the output of audio codecs aggregates the results of numerous listening tests
—Different persons may give different ratings for the same output
1.2 Types of Service Guarantees
A service is provided by a service provider upon receiving service requests and under certain conditions.
For example, an ISP might grant Internet access when you:
—Pay your monthly fee
—Behave according to specified policies (do not send spam, do not offend other netizens, etc.)
—Obey certain technical standards (right modem, dial-in numbers, etc.)
—And no serious breakdown of the network infrastructure happens
1.2 Types of Service Guarantees
Given that these conditions are fulfilled, the provider might give one of the following promises:
Guaranteed quality of service (QoS):
—The service provider claims that a certain service level will be provided under any circumstances
—Anything worse is a contract violation
—Sometimes additional requirements are posed, e.g. to guarantee a certain end-to-end delay in a network, you must not exceed a given sending rate; example:
–An ISDN B-channel guarantees 64 kbit/s; anything in excess is dropped
1.2 Types of Service Guarantees
Expected QoS or statistical QoS:
—The service provider promises the service "more or less"
—Problem: how to specify this exactly? Examples:
–At most x out of N consecutive data packets will get lost
–At most x / N * 100% of data packets will get lost
–The probability that a packet is lost is x / N
—The first two examples show the difference between short-term and long-term service guarantees (made concrete in the sketch below):
–1st case: "at most 1 packet out of any 100 consecutive packets"
–2nd case: "at most 10,000 packets out of 1,000,000"
Best-effort service:
—"I will do my very best, but I promise nothing"
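The distinction can be made concrete with a small sketch. Everything below (the uncorrelated loss trace, the 0.8% loss probability, both function names) is an illustrative assumption, not taken from the slides:

```python
import random

random.seed(1)
# Synthetic, uncorrelated loss trace: True = packet lost, ~0.8% mean loss.
trace = [random.random() < 0.008 for _ in range(100_000)]

def violates_short_term(losses, N=100, x=1):
    """True if ANY window of N consecutive packets has more than x losses."""
    window = sum(losses[:N])
    if window > x:
        return True
    for i in range(N, len(losses)):
        window += losses[i] - losses[i - N]  # slide the window by one packet
        if window > x:
            return True
    return False

def violates_long_term(losses, N=100, x=1):
    """True if the overall loss fraction exceeds x/N."""
    return sum(losses) / len(losses) > x / N

# A trace can meet the long-term bound (0.8% < 1%) while still breaking
# the much stricter per-window bound somewhere in 100,000 packets.
print(violates_short_term(trace), violates_long_term(trace))  # expect: True False
```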
1.3 Types of Performance Measurements
System-oriented vs. application-oriented measures:
—System-oriented measures are independent of specific applications
—Application-oriented measures might depend in complex ways on system-oriented measures
Example, video conferencing over the Internet:
—Application-oriented measures: frame rate, resolution, color depth, SNR, absence of distortions, turnaround times, lip synchronization
—System-/network-oriented measures: throughput, delay, jitter, losses, blocking probabilities, ...
Often there is no simple way to predict application measures from system-oriented measures:
—A video conferencing tool might be aware of packet losses and provide mechanisms to conceal them, presenting the user with slightly degraded video quality instead of dropouts or black boxes.
1.4 Measures for Different Types of Computing Systems
Desktop systems (single user):
—Response times, graphics performance (e.g. level of detail)
Server systems (multiple users):
—Throughput (processor, I/O bandwidth)
—Reliability (MTBF, mean time between failures)
—Availability (fraction of time the system is operational, often characterized by downtime per year; a worked example follows)
—Utilization
Embedded systems:
—Energy consumption
—Memory consumption
—Real-time performance:
–Number of deadline misses
–Jitter for periodic tasks
–Interrupt latencies
—System utilization
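A brief worked example of the availability measure, computed from MTBF and MTTR (mean time to repair); the numbers are illustrative assumptions:

```python
# Steady-state availability from MTBF and MTTR (illustrative values).
MTBF_HOURS = 2000.0   # assumed mean time between failures
MTTR_HOURS = 4.0      # assumed mean time to repair

availability = MTBF_HOURS / (MTBF_HOURS + MTTR_HOURS)
downtime_per_year = (1.0 - availability) * 365 * 24

print(f"availability  = {availability:.5f}")         # ~0.99800
print(f"downtime/year = {downtime_per_year:.1f} h")  # ~17.5 h
```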
1.4 Measures for Communication Networks
Delay:
—Processing and queuing delays in network elements
—Medium access delay in MAC protocols
Jitter: delay variation
Throughput: number of requests / packets which go through the network per unit time
Goodput: similar to throughput, but without overhead (e.g. control packets, retransmissions)
Loss rate: fraction of packets which are lost or erroneous
Utilization: fraction of time a communication link / resource is busy
Blocking probability: probability of getting no (connection-oriented) service; sometimes you get no line when you pick up the phone
Dropping probability: in cellular systems, the probability that an ongoing call gets lost upon handover
A sketch computing several of these measures from a packet trace follows.
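A minimal sketch computing throughput, goodput, loss rate, and one simple jitter estimate; the trace format and all values are hypothetical:

```python
# Each record: (send_time_s, recv_time_s, size_bytes, is_control_packet);
# recv_time of None marks a lost packet. All values are made up.
trace = [
    (0.00, 0.10, 1500, False),
    (0.01, 0.12, 1500, False),
    (0.02, None, 1500, False),   # lost packet
    (0.03, 0.15,   64, True),    # control packet (counts as overhead)
    (0.04, 0.18, 1500, False),
]

delivered = [r for r in trace if r[1] is not None]
duration = max(r[1] for r in delivered) - min(r[0] for r in trace)

throughput_bps = sum(r[2] * 8 for r in delivered) / duration
goodput_bps = sum(r[2] * 8 for r in delivered if not r[3]) / duration
loss_rate = 1 - len(delivered) / len(trace)
delays = [r[1] - r[0] for r in delivered]
jitter = max(delays) - min(delays)  # one simple delay-variation estimate

print(throughput_bps, goodput_bps, loss_rate, jitter)
```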
1.5 Why We Need Network Simulation
The design and management of network systems are becoming an increasingly challenging task. Why?
Traditional static calculations are no longer a reasonable approach for validating the implementation of a new network, because of the stochastic nature of network traffic and the complexity of the overall system.
Poor network performance may have serious impacts on the successful operation of an organization's business.
Network designers increasingly rely on methods that help them evaluate several design proposals before the final decision is made and the actual system is built.
1.5 Why We Need Network Simulation
Network designers typically set the following objectives:
—Performance modeling: obtain statistics for various performance parameters of links, routers, switches, buffers, response time, etc.
—Failure analysis: analyze the impact of network element failures.
—Network design: compare statistics about alternative network designs to evaluate the requirements of alternative design proposals.
—Network resource planning: measure the impact of changes on the network's performance, such as the addition of new users, new applications, or new network elements.
1.6 Goals of Performance Evaluation
General goals:
—Determine certain performance measures for existing systems or for models of (existing or future) systems.
—Develop new analytical and methodological foundations, e.g. in queuing theory, simulation, etc.
—Find ways to apply theoretical approaches in creating and evaluating performance models.
Typical specific goals:
—Bottleneck analysis and optimization
—Comparison of alternative systems / protocols / algorithms
—Capacity planning
—Contract validation
—Performance analysis is often a tool in investment decisions, or is mandated by other economic reasons
1.6 Meta Goals
The methods, workloads, performance measures, etc. should be relevant and objective; examples:
—To determine the maximum network throughput, it is appropriate to use a high load instead of a (typical) low load.
—To test your system under errors, you have to force those errors.
Important notes:
—Results should be communicated in a clear, concise and understandable manner to the persons setting the goals (often suits :o)
—The performance assessment should be fair and careful, e.g. when comparing OUR system to THEIRS
—The results should be complete, e.g. you should not restrict yourself to a single workload that favors your system
—The results should be reproducible; this is often not easy, e.g. for measurements in wireless systems
1.7 Types of Communications Networks, Modeling Constructs
A communications network consists of network elements: nodes (senders and receivers) and the connecting communications media.
Among several criteria for classifying networks, we use two: transmission technology and scale.
Transmission technology classifies networks as broadcast or point-to-point networks.
The scale, or distance, also determines the technique used in a network: wireline or wireless, and the physical area coverage (LAN, MAN, WAN, etc.).
1.8 Performance Targets for Simulation Purposes
The following network attributes have a profound effect on network performance and are the usual targets of network modeling. These attributes are the goals of the statistical analysis, design, and optimization of computer networks.
—Link (channel) capacity: the number of messages per unit time handled by a link, usually measured in bits per second.
—Bandwidth: the difference between the highest and lowest frequencies available for network signals.
—Response time: the time it takes a network system to react to a certain source's input.
—Latency: the amount of time it takes for a unit of data to be transmitted across a network link.
1.8 Performance Targets for Simulation Purposes
—Routing protocols: a route may traverse multiple links with different capacities, latencies, and reliabilities. The objective of routing protocols is to find an optimal or near-optimal route between source and destination while avoiding congestion.
—Traffic engineering: the use of mechanisms that avoid congestion by allocating network resources optimally, rather than by continually increasing network capacity.
—Protocol overhead: protocol messages and application data are embedded inside protocol data units, such as frames, packets, and cells.
—Burstiness: the most dangerous cause of network congestion is the burstiness of network traffic (see the sketch after this list).
—Frame size: large frames can fill up router buffers much faster than smaller frames, resulting in lost frames and retransmissions.
—Dropped packet rate: the rate of dropping packets at the lower layers determines the rate of retransmitting packets at the transport layer.
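Burstiness is often modeled with an on/off source: the source alternates between emitting packets back to back and staying silent. A minimal sketch; all parameters (burst and gap durations, packet spacing) are illustrative assumptions:

```python
import random

random.seed(42)

def on_off_arrivals(n_packets, mean_on=5.0, mean_off=50.0, pkt_gap=0.001):
    """Return packet arrival times; burst and gap lengths are exponential."""
    t, times = 0.0, []
    while len(times) < n_packets:
        burst_end = t + random.expovariate(1.0 / mean_on)
        while t < burst_end and len(times) < n_packets:
            times.append(t)                       # back-to-back packets
            t += pkt_gap
        t += random.expovariate(1.0 / mean_off)   # silent (off) period
    return times

arrivals = on_off_arrivals(10_000)
# Same long-run rate as a smooth source, but the packets arrive in
# clusters that can overflow router buffers.
```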
1.9 Systems: A Performance Evaluation View
Features of a system for performance evaluation purposes (figure).
1.9 Elements of a System: Input
Workload: specifies the arrival of requests which the system is supposed to serve; examples:
—Arrival of packets at a communication network
—Arrival of programs at a computer system
—Arrival of instructions at a processor
—Arrival of read/write requests at a database
Workload characteristics (see the generator sketch below):
—Request type (e.g. TCP packet vs. UDP packet vs. ...)
—Request size / service time / resource consumption (e.g. packet lengths)
—Inter-arrival times of requests
—(Statistical) dependence between requests
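A minimal sketch of a synthetic workload generator covering the last three characteristics; the exponential distributions and all parameter values are assumptions chosen for illustration:

```python
import random

random.seed(7)

def synthetic_workload(n, rate=100.0, mean_size=1000.0):
    """Return (arrival_time, size) tuples with independent requests."""
    t, requests = 0.0, []
    for _ in range(n):
        t += random.expovariate(rate)               # inter-arrival, mean 1/rate
        size = random.expovariate(1.0 / mean_size)  # request size in bytes
        requests.append((t, size))
    return requests

workload = synthetic_workload(1000)
# Heavy-tailed or correlated workloads would swap in other distributions
# to model statistical dependence between requests.
```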
1.9 Elements of a System: Input
Configuration or parameters: in general, all inputs influencing the system's operation; examples:
—Maximum number of retransmissions in ARQ schemes
—Time slice length in a multitasking operating system
Not all of these can be (easily) controlled.
Factors: a subset of the parameters which are purposely varied during a performance study to assess their influence.
Error model: specifies the types and frequencies of failures of system components or communication channels (a channel sketch follows):
—Persistent vs. transient errors:
–Component failures are often persistent
–Channel errors are often transient
—System malfunctioning vs. malicious behavior caused by an adversary
—Occurrence of "chain reactions" or "alarm storms"
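Transient, bursty channel errors are commonly captured with a two-state (Gilbert-Elliott style) model; the transition and error probabilities below are illustrative assumptions:

```python
import random

random.seed(3)

P_GOOD_TO_BAD = 0.01   # chance per packet of entering a bad period
P_BAD_TO_GOOD = 0.10   # chance per packet of recovering from it
P_ERR_GOOD = 0.001     # error probability in the good state
P_ERR_BAD = 0.30       # error probability in the bad state

def channel_errors(n):
    """Return booleans: True where the channel corrupts a packet."""
    state_bad, errors = False, []
    for _ in range(n):
        p_flip = P_BAD_TO_GOOD if state_bad else P_GOOD_TO_BAD
        if random.random() < p_flip:
            state_bad = not state_bad        # transient state change
        p_err = P_ERR_BAD if state_bad else P_ERR_GOOD
        errors.append(random.random() < p_err)
    return errors

print(sum(channel_errors(10_000)))  # errors arrive clustered in bursts
```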
1.9 Elements of a System: Others
The system generates an output, some parts of which are presented to the user.
—It can also have an internal state which, together with the input, determines its operation.
—There can be feedback: some parts of the output serve as input.
—To obtain the desired performance measures, the output or the observable system state may have to be processed further by some "function" f.
1.10 Classifications of Systems
There are a number of ways to classify systems [1].
Static vs. dynamic systems:
—In a static system the output depends only on the current input, not on past inputs or the current time.
—In a dynamic system the output might depend on older inputs ("memory") or on the current time (a system needs internal state to have memory).
Time-varying vs. time-invariant systems:
—Time-invariant: the output might depend on the current and past inputs, but not on the current time.
—In a time-varying system this restriction is removed.
Open vs. closed systems:
—Open systems have an "outside world" which is not controllable and which might generate workloads, failures, or changes in configuration.
—In a closed system everything is under control.
1.10 Classifications of Systems
Stochastic vs. deterministic systems:
—In a stochastic system at least one part of the input or internal state is a random variable / random process, so the outputs are also random.
—Almost all "real" systems are stochastic, either because of "true" randomness, or because the system is so complex / so susceptible to small parameter variations that predictions are hardly possible.
Continuous-time systems (CTS) vs. discrete-time systems (DTS):
—In a CTS, state changes might happen at any time, even uncountably often within a finite time interval.
—In a DTS, there are at most countably many state changes within any finite time interval, occurring either:
–at arbitrary times, or
–only at certain prescribed instants (e.g. equidistant).
—We also refer to discrete-time systems as discrete-event systems (DES); a minimal event loop is sketched below.
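The core of a discrete-event simulator is a time-ordered event queue processed one event at a time. A minimal sketch; the single "arrival" process and its exponential spacing are illustrative assumptions:

```python
import heapq
import random

random.seed(11)
event_queue = []  # entries: (time, sequence_no, handler)

def schedule(t, seq, handler):
    heapq.heappush(event_queue, (t, seq, handler))

def arrival(t, seq):
    print(f"t={t:.3f}: arrival #{seq}")
    if seq < 5:  # schedule the next arrival after a random gap
        schedule(t + random.expovariate(1.0), seq + 1, arrival)

schedule(0.0, 0, arrival)
while event_queue:  # the discrete-event loop itself
    t, seq, handler = heapq.heappop(event_queue)
    handler(t, seq)
```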
1.11 Case Study
Consider remote pipes (rpipe) versus remote procedure calls (rpc):
—rpc is like a procedure call, but the procedure is executed on a remote server
–The client (caller) blocks until the call returns
—rpipe is like a pipe, but the server receives the output on a remote machine
–The client process can continue; the call is non-blocking
Goal: compare the performance of applications using rpipes with that of similar applications using rpcs.
System Definition
The system consists of a client, the network, and a server.
The key component is the "channel", either an rpipe or an rpc:
—Only the subsets of the client and server that handle the channel are part of the system
—Try to minimize the effect of components outside the system
Services
There are a variety of services that can happen over an rpipe or rpc.
Choose data transfer as a common one, with data being a typical result of most client-server interactions.
Classify the amount of data as either large or small.
Thus, two services:
—Small data transfer
—Large data transfer
Metrics
Limit metrics to correct operation only (no failures or errors).
Study service rate and resources consumed (a timing sketch follows):
A) Elapsed time per call
B) Maximum call rate per unit time
C) Local CPU time per call
D) Remote CPU time per call
E) Number of bytes sent per call
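A minimal sketch of how metrics A and B could be measured; make_call is a purely hypothetical stand-in for issuing one rpc or rpipe request:

```python
import time

def make_call():
    time.sleep(0.002)  # placeholder for a real channel request

N_CALLS = 100
start = time.perf_counter()
for _ in range(N_CALLS):
    make_call()
elapsed = time.perf_counter() - start

print(f"elapsed time per call: {elapsed / N_CALLS * 1000:.2f} ms")  # metric A
print(f"max call rate: {N_CALLS / elapsed:.1f} calls/s")            # metric B
```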
Parameters
System parameters:
—Speed of CPUs (local, remote)
—Network speed and reliability (retransmissions)
—Operating system overhead for interfacing with channels and with the network
Workload parameters:
—Time between calls
—Number and sizes of parameters and of results
—Type of channel (rpc or rpipe)
—Other loads on the CPUs and on the network
Key Factors
Type of channel:
—rpipe or rpc
Speed of network:
—Choose short distance (LAN) vs. across the country (WAN)
Size of parameters:
—Small or large
Number of calls:
—11 values: 8, 16, 32 ... 1024
All other parameters are fixed.
(Note: try to run during "light" network load.)
Evaluation Technique
Since prototypes exist, use measurement.
Use analytic modeling, based on the measured data, for values outside the scope of the experiments conducted.
Workload
A synthetic program generates the specified channel requests.
It will also monitor the resources consumed and log the results.
Use "null" channel requests to get a baseline for the resources consumed by monitoring and logging themselves.
—(Remember the Heisenberg principle: the measurement perturbs the system being measured!)
Experimental Design
Full factorial design (all possible combinations of factors):
—2 channels, 2 network speeds, 2 parameter sizes, 11 numbers of calls
—2 x 2 x 2 x 11 = 88 experiments (enumerated in the sketch below)
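A short sketch enumerating the full factorial design; the exact eleven call-count values are not fully listed on the slide, so the ones below are placeholders:

```python
from itertools import product

channels = ["rpipe", "rpc"]
networks = ["LAN", "WAN"]
sizes = ["small", "large"]
n_calls = [2 ** k for k in range(11)]  # 11 placeholder values: 1 .. 1024

experiments = list(product(channels, networks, sizes, n_calls))
print(len(experiments))  # 2 * 2 * 2 * 11 = 88
for channel, network, size, n in experiments[:3]:
    print(channel, network, size, n)  # first few experiment configurations
```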
Data Analysis
Analysis of variance (ANOVA) will be used to quantify the effects of the first three factors:
—Are they significantly different?
Regression will be used to quantify the effect of the number of consecutive calls (a sketch follows):
—Is performance linear? Exponential?
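A minimal sketch of the regression step: fit elapsed time against the number of consecutive calls and inspect the residuals. The measurements are made-up placeholders, not real results:

```python
import numpy as np

n_calls = np.array([8, 16, 32, 64, 128, 256, 512, 1024], dtype=float)
elapsed = np.array([0.9, 1.7, 3.2, 6.5, 12.8, 25.9, 51.2, 103.0])  # fake data

slope, intercept = np.polyfit(n_calls, elapsed, 1)  # linear model fit
residuals = elapsed - (slope * n_calls + intercept)

print(f"time per call ~ {slope * 1000:.2f} ms, fixed cost ~ {intercept:.2f} s")
print("max |residual|:", np.abs(residuals).max())
# Small, patternless residuals support the "linear in n" hypothesis;
# systematic curvature would suggest trying another functional form.
```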