Border Gateway Protocol
Intra-AS vs. Inter-AS Routing
2 Types of Routing Protocols
Path Vector Routing
Path vector vs. distance vector: in path vector routing, a speaker node creates a routing table and advertises it to the speaker nodes in the neighboring ASs.
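The speaker-node exchange above can be sketched in a few lines. This is a minimal illustration only: the AS and network names, the table layout, and the shortest-AS-path preference are assumptions for the example (real BGP selects routes by policy, not merely path length).

```python
# Hypothetical path-vector table merge. Names (AS1, N1, ...) and the
# shortest-path tie-break are illustrative, not actual BGP behavior.
def merge_advertisement(own_as, table, advertised):
    """Fold a neighbor's advertised table into our own.

    `advertised` maps destination -> AS path, where each path already
    starts with the advertising neighbor's AS.
    """
    for dest, path in advertised.items():
        if own_as in path:
            continue  # loop prevention: the path already contains us
        if dest not in table or len(path) < len(table[dest]):
            table[dest] = path  # adopt the shorter AS path
    return table

table = {"N1": ["AS4", "AS3"]}  # existing route to N1 via AS4
merge_advertisement("AS1", table, {
    "N1": ["AS2"],          # shorter path: replaces the AS4 route
    "N2": ["AS2", "AS1"],   # contains AS1 itself: rejected as a loop
})
print(table)   # {'N1': ['AS2']}
```

The `own_as in path` check is the key property of path vector routing: carrying the whole AS path lets a speaker detect and discard looping routes.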
Initial routing tables in path vector routing
Stabilized tables for four ASs
BGP Messages
BGP messages are exchanged using TCP on port 179.
BGP Function
BGP provides each AS a means to:
Obtain subnet reachability information from neighboring ASs.
Propagate reachability information to all AS-internal routers.
Determine "good" routes to subnets based on reachability information and policy.
BGP also allows a subnet to advertise its existence to the rest of the Internet: "I am here."
Distributing Reachability Info
Using the eBGP session between 3a and 1c, AS3 sends prefix reachability info to AS1.
1c can then use iBGP to distribute the new prefix info to all routers in AS1.
1b can then re-advertise the new reachability info to AS2 over the 1b-to-2a eBGP session.
When a router learns of a new prefix, it creates an entry for the prefix in its forwarding table.
[Figure: eBGP and iBGP sessions among the routers of AS1, AS2, and AS3]
Why different Intra-AS and Inter-AS routing?
Policy: Inter-AS: the admin wants control over how its traffic is routed and who routes through its network. Intra-AS: a single admin, so no policy decisions are needed.
Scale: hierarchical routing saves table size and reduces update traffic.
Performance: Intra-AS routing can focus on performance; in Inter-AS routing, policy may dominate over performance.
CHAPTER 24 Congestion Control and Quality of Service
Congestion Control
When one part of the subnet (e.g., one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen:
The subnet must prevent additional packets from entering the congested region until those already present can be processed.
The congested routers can discard queued packets to make room for those that are arriving.
Factors that Cause Congestion
Packet arrival rate exceeds the outgoing link capacity.
Insufficient memory to store arriving packets.
Bursty traffic.
Slow processor.
Congestion control and quality of service are two issues so closely bound together that improving one means improving the other, and ignoring one usually means ignoring the other.
Load Shedding
When buffers become full, routers simply discard packets. Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer.
For a file transfer, for example, a router cannot discard older packets, since this would cause a gap in the received data.
For real-time voice or video, it is probably better to throw away old data and keep new packets.
The application can also mark packets with a discard priority.
So the policy for file transfer is called wine (old is better than new), and the policy for multimedia is called milk (new is better than old).
DATA TRAFFIC The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.
Congestion is an important issue in packet-switched networks.
Traffic Descriptors
Traffic descriptors are quantitative values that represent a data flow.
Figure 24.1 Traffic descriptors
Traffic Profiles
For our purposes, a data flow can have one of the following traffic profiles: constant bit rate, variable bit rate, or bursty. In the bursty data category, the data rate changes suddenly in a very short time; it may jump from zero, for example, to 1 Mbps in a few microseconds, and vice versa.
Figure 24.2 Three traffic profiles
CONGESTION
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.
Topics discussed in this section: Network Performance
Figure 24.3 Queues in a router
A packet in a router undergoes three steps before departing: it waits in the input queue, it is processed by the routing processor, and it waits in the output queue.
Two issues mainly cause congestion:
If the packet arrival rate is higher than the packet processing rate, the input queues become longer and longer.
If the packet departure rate is less than the packet processing rate, the output queues become longer and longer.
Figure Packet delay and throughput as functions of load
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).
Topics discussed in this section: Open-Loop Congestion Control, Closed-Loop Congestion Control
Figure 24.5 Congestion control categories
Retransmission Policy: The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.
Acknowledgment Policy: If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
Window Policy: The Selective Repeat window is better than the Go-Back-N window for congestion control.
Discarding Policy: A good discarding policy by the routers may prevent congestion and at the same time not harm the integrity of the transmission.
Admission Policy: A router can deny establishing a virtual-circuit connection if there is congestion in the network or a possibility of future congestion.
Figure 24.6 Backpressure method for alleviating congestion
Figure Choke packet
The Selective Repeat window is better than Go-Back-N.
The receiver should send cumulative acknowledgments.
Higher-priority packets should not be discarded.
Implicit signaling: the source infers congestion from symptoms in the network, with no communication from the congested nodes.
Explicit signaling: congested nodes send explicit control data to warn of congestion.
Backward signaling: the warning travels toward the source, in the direction opposite to the congested traffic.
Forward signaling: the warning travels in the direction of the congestion, toward the destination.
TECHNIQUES TO IMPROVE QoS
In Section 24.5 we tried to define QoS in terms of its characteristics. In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.
Topics discussed in this section: Scheduling, Traffic Shaping, Resource Reservation, Admission Control
Flow Characteristics
Bandwidth: Different applications need different bandwidths. In video conferencing we need to send millions of bits per second to refresh a color screen, while the total number of bits in an e-mail may not even reach a million.
Delay: Source-to-destination delay is another flow characteristic, and applications can tolerate delay in different degrees. Telephony, audio conferencing, video conferencing, and remote log-in need minimum delay, while delay in file transfer or e-mail is less important.
Reliability: Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or acknowledgment, which entails retransmission. It is more important that electronic mail, file transfer, and Internet access have reliable transmissions than telephony or audio conferencing.
Jitter: Jitter is the variation in delay for packets belonging to the same flow. For example, if four packets depart at times 0, 1, 2, and 3 and arrive at 20, 21, 22, and 23, all have the same delay, 20 units of time. If instead the four packets arrive at 21, 23, 21, and 28, they have different delays: 21, 22, 19, and 25. High jitter means the difference between delays is large; low jitter means the variation is small.
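The jitter arithmetic in the example above can be checked directly: each packet's delay is its arrival time minus its departure time, and jitter is the spread among those delays.

```python
# Jitter illustration using the departure/arrival times from the example.
departures = [0, 1, 2, 3]
arrivals = [21, 23, 21, 28]

# Per-packet delay = arrival - departure.
delays = [a - d for a, d in zip(arrivals, departures)]
print(delays)                          # [21, 22, 19, 25]

# A simple measure of jitter: the spread between largest and smallest delay.
variation = max(delays) - min(delays)
print(variation)                       # 6
```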
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is familiar to those who have had to wait for a bus at a bus stop.
Figure: a conceptual view of a FIFO queue
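The fill-up-and-discard behavior just described can be sketched as a bounded FIFO buffer with tail drop. The class name and the tiny capacity are illustrative.

```python
from collections import deque

# Minimal FIFO (tail-drop) queue sketch; capacity and names are illustrative.
class FIFOQueue:
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity

    def enqueue(self, packet):
        """Accept a packet, or drop it when the buffer is full."""
        if len(self.buf) >= self.capacity:
            return False          # tail drop: new arrivals are discarded
        self.buf.append(packet)
        return True

    def dequeue(self):
        """Serve the oldest waiting packet, if any."""
        return self.buf.popleft() if self.buf else None

q = FIFOQueue(2)
print(q.enqueue("p1"), q.enqueue("p2"), q.enqueue("p3"))  # True True False
print(q.dequeue())                                        # p1
```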
Figure 24.17 Priority queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last. A priority queue can provide better QoS than a FIFO queue because higher-priority traffic, such as multimedia, can reach the destination with less delay.
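A short sketch of this scheduling discipline, using a heap so the highest-priority class is always served first. The traffic names, the two-level numbering (lower number = higher priority), and the sequence tie-break are assumptions for the example.

```python
import heapq

# Priority-queuing sketch: lower number = higher priority (assumed convention).
arrivals = [("video", 0), ("email", 2), ("voice", 0), ("ftp", 1)]

pq = []
for seq, (pkt, prio) in enumerate(arrivals):
    heapq.heappush(pq, (prio, seq, pkt))   # seq keeps FIFO order within a class

served = [heapq.heappop(pq)[2] for _ in range(len(pq))]
print(served)   # ['video', 'voice', 'ftp', 'email']
```

Note that email, although it arrived second, is served last: strict priority queuing can starve low-priority traffic, which is one motivation for the weighted fair queuing discussed next.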
Figure 24.18 Weighted fair queuing
A better scheduling method is weighted fair queuing. In this technique, the packets are still assigned to different classes and admitted to different queues. The queues, however, are weighted based on the priority of the queues; a higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on its weight.
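A weighted round-robin pass can illustrate the idea: in each round, up to `weight` packets are served from each class. The class names, weights, and packet labels are illustrative; production WFQ implementations schedule by computed finish times rather than simple packet counts.

```python
from collections import deque

# Weighted round-robin sketch of weighted fair queuing: each round serves
# up to `weight` packets from each class queue (values are illustrative).
queues = {
    "high":   (3, deque(["h1", "h2", "h3", "h4"])),
    "medium": (2, deque(["m1", "m2"])),
    "low":    (1, deque(["l1", "l2"])),
}

served = []
while any(q for _, q in queues.values()):      # run until all queues drain
    for name, (weight, q) in queues.items():
        for _ in range(weight):
            if q:
                served.append(q.popleft())

print(served)   # ['h1', 'h2', 'h3', 'm1', 'm2', 'l1', 'h4', 'l2']
```

Unlike strict priority queuing, every class makes progress each round, with service shares in proportion to the weights.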
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.
Figure Leaky bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which water is poured into the bucket, unless the bucket is empty. The input rate can vary, but the output rate remains constant.
For example, a rate of 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets in one tick.
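The bytes-per-tick variant in the example can be sketched as a per-tick byte budget: each tick, queued packets are sent as long as their total size fits within the budget. The function name and packet sizes are illustrative.

```python
from collections import deque

# Byte-counting leaky bucket sketch (names and sizes are illustrative).
def leaky_bucket_tick(queue, bytes_per_tick=1024):
    """Send packets from `queue` up to `bytes_per_tick` bytes; return sizes sent."""
    budget = bytes_per_tick
    sent = []
    while queue and queue[0] <= budget:   # head packet still fits this tick
        size = queue.popleft()
        budget -= size
        sent.append(size)
    return sent

q = deque([512, 256, 512, 1024])
print(leaky_bucket_tick(q))   # [512, 256]  (next 512 exceeds the remaining budget)
print(leaky_bucket_tick(q))   # [512]
print(leaky_bucket_tick(q))   # [1024]
```

However bursty the arrivals, at most 1024 bytes leave per tick, which is exactly the constant output rate of the analogy.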
Figure 24.20 Leaky bucket implementation
A host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is then silent for 5 s, and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. A leaky bucket smooths this traffic by sending out data at a constant rate of 3 Mbps over the same 10 s.
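The averaging in this example can be verified with a line or two of arithmetic: total bits sent divided by total elapsed time gives the smoothed output rate.

```python
# Average-rate check for the leaky-bucket example above.
burst1 = 12e6 * 2      # 12 Mbps for 2 s -> 24 Mbits
burst2 = 2e6 * 3       # 2 Mbps for 3 s  ->  6 Mbits
total_bits = burst1 + burst2
duration = 2 + 5 + 3   # transmit, idle, transmit (seconds)

rate_mbps = total_bits / duration / 1e6
print(rate_mbps)       # 3.0 -> the leaky bucket outputs a constant 3 Mbps
```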
Note A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket is full.
Note The token bucket allows bursty traffic at a regulated maximum rate.
Figure Token bucket
The Leaky Bucket Algorithm
The leaky bucket algorithm is used to control the rate at which traffic enters a network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, packets are discarded.
The Leaky Bucket Algorithm
(a) A leaky bucket with water. (b) A leaky bucket with packets.
Leaky Bucket Algorithm, cont.
The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input, and does nothing when the input is idle. The host injects one packet per clock tick onto the network, resulting in a uniform flow of packets that smooths out bursts and reduces congestion. When packets are all the same size (as in ATM cells), one packet per tick is fine; for variable-length packets, it is better to allow a fixed number of bytes per tick.
Token Bucket Algorithm
In contrast to the leaky bucket, the token bucket algorithm allows the output rate to vary, depending on the size of the burst. In the token bucket algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every t seconds. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
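The one-token-per-packet scheme just described can be sketched as follows; the class name, bucket capacity, and tick rate are illustrative values for the example.

```python
# Token-bucket sketch: tokens accrue at a fixed rate up to a cap; a packet
# is sent only if a token is available (one token per packet, as above).
class TokenBucket:
    def __init__(self, capacity, tokens_per_tick=1):
        self.capacity = capacity        # max tokens an idle host can save
        self.rate = tokens_per_tick
        self.tokens = 0

    def tick(self):
        """Clock tick: add tokens, capped at the bucket size."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        """Capture and destroy one token per packet; report success."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(capacity=3)
for _ in range(5):
    tb.tick()                     # idle for 5 ticks: savings cap at 3 tokens
burst = [tb.try_send() for _ in range(4)]
print(burst)                      # [True, True, True, False]
```

The saved tokens let the host burst three packets at once, but the capacity bounds the burst size: that is the "regulated maximum rate" of the earlier note.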
The Token Bucket Algorithm
(a) Before. (b) After.
Leaky Bucket vs. Token Bucket
LB discards packets when the bucket (buffer) is full; TB does not discard packets, it discards tokens.
With TB, a packet can be transmitted only if there are enough tokens to cover its length in bytes.
LB sends packets at a constant average rate; TB allows large bursts to be sent faster by temporarily speeding up the output.
TB allows saving up tokens (permissions) to send large bursts later; LB does not allow saving.