1
Using Edge-to-Edge Feedback Control to Make Assured Service More Assured in DiffServ Networks
K. R. R. Kumar, A. L. Ananda, Lillykutty Jacob
Centre for Internet Research, School of Computing, National University of Singapore
2
Outline
–Introduction: Need for QoS, Solutions
–TCP over DiffServ: Issues
–CATC: Key Observations, Design Considerations, Topology, Edge-to-Edge Feedback Architecture, Marking Algorithm
–Simulation Details
–Results and Analysis
–Deployment
–Inferences and Future Work
3
Introduction
Need for QoS:
–Exponential growth in traffic has resulted in deterioration of QoS.
–Over-provisioning of networks could be a solution.
–A better solution: an intelligent network service with better resource allocation and management methods.
4
Solutions
Integrated Services
–Per-flow QoS.
–Not scalable.
Differentiated Services
–QoS for aggregated flows.
–Scalable.
–The philosophy: simpler at the core (AQM), complex at the edges.
5
DiffServ
[Figure: logical view of a packet classifier and traffic conditioner. Packets pass through a Classifier, Meter, Marker, and Shaper/Dropper, and are then either forwarded or dropped.]
6
DiffServ contd..
Per-hop behaviours
–Expedited Forwarding: deterministic QoS
–Assured Forwarding: statistical QoS
Classifier
Traffic Conditioner: Meter, Marker, Shaper/Dropper
–Metering: Token Bucket (TB), Time Sliding Window (TSW)
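The marking algorithm later in the deck relies on a per-packet rate estimate (avg_rate). Assuming the TSW meter named here follows the usual Clark and Fang formulation, a minimal sketch might look like this (class, method, and parameter names are our assumptions, not from the slides):

```python
class TimeSlidingWindow:
    """Time Sliding Window (TSW) rate estimator, Clark-Fang style.
    Keeps a smoothed per-flow rate estimate updated on every packet arrival."""

    def __init__(self, win_len, initial_rate=0.0):
        self.win_len = win_len        # averaging window length (seconds)
        self.avg_rate = initial_rate  # running rate estimate (bytes/sec)
        self.t_front = 0.0            # time of the last packet arrival

    def update(self, pkt_size, now):
        # Bytes notionally "seen" within the window, plus the new packet
        bytes_in_win = self.avg_rate * self.win_len + pkt_size
        # Re-average over the elapsed time plus the window length
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win_len)
        self.t_front = now
        return self.avg_rate
```

With a 1-second window and a steady stream of 1000-byte packets every 10 ms, the estimate converges toward the true rate of 100,000 bytes/sec.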
7
TCP over DiffServ
–Recent measurements show TCP flows in the majority (approx. 95% of byte share).
–TCP flows are much more sensitive to transient congestion.
–Unruly flows such as UDP kill TCP traffic.
–Bandwidth assurance is affected by the size of the target rate.
–Biased against longer RTTs and smaller window sizes.
8
Congestion Aware Traffic Conditioner (CATC)
Key observations
–Markers, one of the major building blocks of a traffic conditioner, help in resource allocation.
–A proper understanding of transient congestion in the network helps.
–Edge routers have a better understanding of the domain traffic.
–An early indication of congestion in the network helps to prioritize packets in advance.
–Existing feedback mechanisms are end-to-end, e.g. ECN.
9
CATC contd..
Design considerations: markers should
–Be least sensitive to marker or TCP parameters.
–Be transparent to end hosts.
–Maintain optimum marking.
–Minimize synchronization.
–Be fair to different target sizes.
–Be congestion aware.
10
Topology
11
Edge-to-Edge Feedback Architecture
Two edge routers: control sender (CS) and control receiver (CR).
Upstream:
–At CS: CS sends control packets (CPs) at a regular interval, the control packet interval (cpi). CPs are given the highest priority.
–At core: Core routers maintain the drop status of best-effort packets. The information is kept as a status flag for at most cpi time. The CP's congestion notification (CN) bit is set or reset based on the status flag.
–At CR: Responds to an incoming CP with the CN bit set by setting the congestion echo (CE) bit of the outgoing acknowledgement.
12
Feedback arch. contd..
Downstream:
–At CS: Maintains a parameter, the congestion factor (cf). cf is set to 1 or 0 based on the status of the CE bit in the received acknowledgement.
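Putting the upstream and downstream halves together, the feedback loop can be sketched as follows (a sketch only: class and field names are illustrative, and the cpi timer and packet transport are omitted):

```python
class CoreRouter:
    """Remembers whether any best-effort packet was dropped since the
    last control packet passed through (flag held at most one cpi)."""

    def __init__(self):
        self.drop_flag = False

    def on_be_drop(self):
        self.drop_flag = True           # a best-effort packet was dropped

    def stamp_control_packet(self, cp):
        cp["CN"] = self.drop_flag       # congestion notification bit
        self.drop_flag = False          # reset: status kept at most cpi time


class ControlReceiver:
    def ack(self, cp):
        # Echo the CN bit back to the sender as the CE bit of the ack
        return {"CE": cp["CN"]}


class ControlSender:
    def __init__(self):
        self.cf = 0                     # congestion factor used by the marker

    def on_ack(self, ack):
        self.cf = 1 if ack["CE"] else 0
```

A drop at the core thus surfaces at the CS as cf = 1 within roughly one round trip of the control loop, and cf falls back to 0 once a congestion-free CP is echoed.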
13
Marking algorithm
For each packet arrival:
if avg_rate <= cir then
  mp = mp + (1 - avg_rate/cir) * (1 + cf*(cir/cir_max));
  mark the packet using:
    cp 11 w.p. mp (marked packets)
    cp 00 w.p. (1 - mp) (unmarked packets)
14
Marking Algo. contd..
else if avg_rate > cir then
  mp = mp + (1 - avg_rate/cir) * (1 - cf*(cir/cir_max));
  mark the packet using:
    cp 11 w.p. mp (marked packets)
    cp 00 w.p. (1 - mp) (unmarked packets)
15
Marking Algo. contd..
where:
  avg_rate = the rate estimate on each packet arrival
  mp = marking probability (<= 1)
  cir = committed information rate (target rate)
  cf = congestion factor
  cir_max = maximum committed information rate
Also, cp denotes codepoint and w.p. denotes "with probability".
16
Algo contd..
Marking probability computed from:
–cir
–avg_rate
–cf
–cir_max (the maximum among all cirs)
17
Algo. contd..
The effect on mp:
–(i) Flow component: (1 - avg_rate/cir) constantly compares the observed average rate with the target rate, keeping the achieved rate close to the target.
–(ii) Network component: cf*(cir/cir_max) provides a dynamic indication of the congestion level in the network. The marking-probability increment is made in proportion to the target rate by multiplying cf with the weight factor cir/cir_max, mitigating the impact of differing target rates.
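The marking rule on the preceding slides can be sketched as a small Python function (a sketch only: the function name and the clamping of mp to [0, 1] are our assumptions; the update formulas are as stated in the slides):

```python
import random

def catc_mark(mp, avg_rate, cir, cf, cir_max):
    """One CATC marking decision. Returns (new_mp, codepoint),
    where codepoint '11' means marked and '00' means unmarked."""
    if avg_rate <= cir:
        # At or under the target rate: raise mp, more aggressively
        # when the network signals congestion (cf = 1)
        mp += (1 - avg_rate / cir) * (1 + cf * (cir / cir_max))
    else:
        # Over the target rate: (1 - avg_rate/cir) < 0, so mp decreases,
        # with the decrease damped in proportion to the target rate
        mp += (1 - avg_rate / cir) * (1 - cf * (cir / cir_max))
    mp = min(max(mp, 0.0), 1.0)  # keep mp a valid probability (<= 1)
    codepoint = "11" if random.random() < mp else "00"
    return mp, codepoint
```

For example, a flow at half its target rate under congestion (avg_rate = 1, cir = 2, cf = 1, cir_max = 4) gets a large mp increase, while the same flow at double its target rate gets mp driven down.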
18
Simulation Details
–NS (2.1b7a) simulator on Red Hat 7.0.
–Modified Nortel's DiffServ module for our architecture implementation.
–Core routers use a RIO-like mechanism.
–FTP bulk data transfer for TCP traffic.
19
Simulation Parameters
20
Simulation details contd..
Experiments conducted:
–Assured services (AS) for aggregates: AS in the under- and well-subscribed cases; AS in the oversubscribed case.
–Protection from BE UDP flows.
–Effect of UDP flows with assured (target) rates.
21
R&A: under- and well-subscribed
22
R&A: over-subscribed
23
R&A: Goodput vs. Time graph (2/6 Mbps target rate)
24
Analysis
CATC:
–Achieves the target rates in the under- and well-subscribed cases.
–Maintains the achieved rate close to its target rate.
–Total link utilization remains more or less constant throughout.
25
R&A: AS in presence of BE UDP and TCP
26
R&A: AS in presence of AS UDP and BE TCP
27
Analysis
CATC:
–Achieves goodput close to the target rates.
–Succeeds in taking the share of BE TCP and UDP flows in the worst-case scenario.
–The average link utilization is quite good.
–The AS UDP flow gets its assured rate.
28
Deployment
–MPLS over DiffServ.
–The marker can be placed anywhere (lack of sensitivity to marker parameters).
29
Inferences and Future work
–The architecture is transparent to TCP sources and hence doesn't require any modifications at the end hosts.
–The edge-to-edge feedback control loop helps the marker take proactive measures to maintain the assured service effectively, especially during periods of congestion.
–A single feedback control loop is used for an aggregated flow; hence the architecture scales to any number of flows between the two edge gateways.
–The architecture adapts to changes in load and network conditions.
–The marking algorithm takes care of bursts in the flows.
30
Future work
–Extend the present architecture to take care of drops in priority queues.
–A new algorithm to incorporate this.
31
Q&A
32
Thank You!