Router Design and Packet Scheduling
IP Router
A router consists of:
- A set of input interfaces at which packets arrive
- A set of output interfaces from which packets depart
A router implements two main functions:
- Forwarding packets to the corresponding output interface
- Managing congestion
Generic Router Architecture
Input and output interfaces are connected through a backplane. A backplane can be implemented by:
- Shared memory: low-capacity routers (e.g., PC-based routers)
- Shared bus: medium-capacity routers
- Point-to-point (switched) bus: high-capacity routers
[Figure: input interfaces connected to output interfaces through an interconnection medium (backplane)]
What a Router Looks Like
- Cisco GSR 12416: capacity 160 Gb/s, power 4.2 kW, 19-inch rack, 6 ft tall
- Juniper M160: capacity 80 Gb/s, power 2.6 kW, 19-inch rack, 3 ft tall
(Chassis depths: 2.5 ft and 2 ft)
Points of Presence (POPs)
[Figure: example backbone topology with routers A–F interconnected across POP1–POP8]
Basic Architectural Components of an IP Router
- Control plane: routing protocols build the routing table
- Datapath (per-packet processing): forwarding table lookup and switching
Per-packet processing in an IP Router
1. Accept a packet arriving on an ingress line.
2. Look up the packet's destination address in the forwarding table to identify the outgoing interface(s).
3. Manipulate the packet header: e.g., decrement the TTL, update the header checksum.
4. Send the packet to the outgoing interface(s).
5. Queue the packet until the line is free.
6. Transmit the packet onto the outgoing line.
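As a rough illustration (not any real router's API), here is a minimal Python sketch of steps 1–6 for a single packet; the forwarding table, packet dicts, and queue objects are made-up stand-ins.

```python
# A minimal, illustrative sketch of steps 1-6 (names are hypothetical):
# the forwarding table maps prefixes to output interfaces, packets are
# dicts, and lists stand in for the egress queues.
import ipaddress

def lookup(table, dst):
    # Step 2: longest-prefix match - pick the most specific matching prefix.
    addr = ipaddress.ip_address(dst)
    matches = [p for p in table if addr in ipaddress.ip_network(p)]
    if not matches:
        return None
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return table[best]

def process_packet(packet, table, output_queues):
    out = lookup(table, packet["dst"])      # step 2: lookup
    if out is None or packet["ttl"] <= 1:
        return                              # slow path: errors, ICMP, etc.
    packet["ttl"] -= 1                      # step 3 (incremental checksum
                                            # update shown on a later slide)
    output_queues[out].append(packet)       # steps 4-5: queue until line free
                                            # step 6 happens when the line drains

table = {"10.0.0.0/8": "if1", "10.1.0.0/16": "if2"}
queues = {"if1": [], "if2": []}
process_packet({"dst": "10.1.2.3", "ttl": 64}, table, queues)
assert queues["if2"]   # the more specific /16 prefix wins
```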
Generic Router Architecture
[Figure: a single forwarding pipeline. Header processing looks up the packet's IP address in an address table (~1M prefixes, off-chip DRAM) to find the next hop, updates the header, and queues the packet in buffer memory (~1M packets, off-chip DRAM).]
Generic Router Architecture
[Figure: the same lookup/update/buffer pipeline replicated per line card – each card has its own header processing, address table, buffer manager, and buffer memory.]
Packet processing is getting harder
[Chart: CPU instructions available per minimum-length packet, since 1996]
This is particularly clear if we look at the number of instructions available to process each arriving packet. The graph, derived from the previous page, shows that the number of instructions per packet is falling exponentially with time. This means that we need to start optimizing the information processing in routers, rather than tweaking bandwidth efficiency and increasing the complexity of the forwarding path. If bandwidth becomes even more plentiful in the future, there is little point in trying to use it efficiently, or even in trying to provide differentiated qualities of service. One alternative might be to do packet switching in optics. But packet switching requires packets to be buffered during times of congestion, which is not yet economically feasible in optics; for this reason, optical packet switching is not yet viable. On the other hand, all-optical circuit switches with very high capacities are already commercially available.
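To make the trend concrete, an illustrative back-of-the-envelope calculation (our numbers, not the graph's): a minimum-length 64-byte packet on a 10 Gb/s link arrives every

\[
\frac{64 \times 8\ \text{bits}}{10\ \text{Gb/s}} = 51.2\ \text{ns},
\]

so a processor sustaining one instruction per nanosecond has only about 50 instructions to spend on each packet.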
Speedup
- C – input/output link capacity
- RI – maximum rate at which an input interface can send data into the backplane
- RO – maximum rate at which an output interface can read data from the backplane
- B – maximum aggregate backplane transfer rate
Backplane speedup: B/C
Input speedup: RI/C
Output speedup: RO/C
[Figure: input and output interfaces (link capacity C, rates RI and RO) connected through the interconnection medium (backplane, capacity B)]
Function Division
Input interfaces:
- Must perform packet forwarding – need to know to which output interface to send each packet
- May enqueue packets and perform scheduling
Output interfaces:
- May enqueue packets and perform scheduling
[Figure: input and output interfaces connected through the backplane]
Three Router Architectures
- Output queued (OQ)
- Input queued (IQ)
- Combined input-output queued (CIOQ)
Output Queued (OQ) Routers
Only output interfaces store packets.
Advantages:
- Easy to design algorithms: only one congestion point
Disadvantages:
- Requires an output speedup of N, where N is the number of interfaces – not feasible
[Figure: packets cross the backplane immediately and queue only at the output interface (rate RO, link capacity C)]
Input Queueing (IQ) Routers
Only input interfaces store packets.
Advantages:
- Easy to build: store packets at the inputs if there is contention at the outputs
- Relatively easy to design algorithms: only one congestion point, but it is not at the output, so backpressure must be implemented
Disadvantages:
- Hard to achieve utilization 1 (due to output contention and head-of-line blocking)
- However, theoretical and simulation results show that for realistic traffic an input/output speedup of 2 is enough to achieve utilization close to 1
[Figure: packets queue at the input interface before crossing the backplane]
Combined Input-Output Queueing (CIOQ) Routers
Both input and output interfaces store packets.
Advantages:
- Easy to build
- Utilization 1 can be achieved with limited input/output speedup (<= 2)
Disadvantages:
- Harder to design algorithms: two congestion points, and flow control must be designed
Note: recent results show that with an input/output speedup of 2, a CIOQ router can emulate any work-conserving OQ router [G+98, SZ98].
[Figure: packets queue at both the input and the output interfaces]
Generic Architecture of a High Speed Router Today
Combined input-output queued architecture:
- Input/output speedup <= 2
Input interface: performs packet forwarding (and classification)
Output interface: performs packet (classification and) scheduling
Backplane:
- Point-to-point (switched) bus; speedup N
- Schedules packet transfers from inputs to outputs
Backplane
A point-to-point switch allows simultaneous transfer of packets between any two disjoint pairs of input-output interfaces.
Goal: come up with a schedule that
- Meets flows' QoS requirements
- Maximizes router throughput
Challenges:
- Address head-of-line blocking at the inputs
- Resolve input/output contention
- Avoid packet drops at the outputs if possible
Note: packets are fragmented into fixed-size cells (why?) at the inputs and reassembled at the outputs. In Partridge et al., a cell is 64 B (what are the trade-offs?).
Head-of-line Blocking
The cell at the head of an input queue cannot be transferred, thus blocking the cells behind it.
[Figure: Input 1's head cell cannot be transferred because it is blocked by the red cell; Input 2's head cell cannot be transferred because the output buffer is full; Inputs 1–3 feed Outputs 1–3.]
Solution to Avoid Head-of-line Blocking
Maintain N virtual queues at each input, i.e., one per output.
[Figure: each of inputs 1–3 keeps a separate queue per output 1–3 (virtual output queues)]
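A minimal Python sketch of the virtual-output-queue idea (data structure only; names are illustrative): each input keeps one FIFO per output, so a blocked head cell for one output never delays cells bound for another.

```python
from collections import deque

class InputPort:
    def __init__(self, num_outputs):
        # One FIFO per output: a virtual output queue (VOQ).
        self.voq = [deque() for _ in range(num_outputs)]

    def enqueue(self, cell, output):
        self.voq[output].append(cell)

    def head_cells(self):
        # Cells eligible for transfer this epoch: one candidate per output,
        # so a busy or full output blocks only its own queue.
        return {o: q[0] for o, q in enumerate(self.voq) if q}

port = InputPort(num_outputs=3)
port.enqueue("red cell", output=0)
port.enqueue("green cell", output=1)
assert port.head_cells() == {0: "red cell", 1: "green cell"}
```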
Cell Transfer
Schedule: ideally, find the maximum number of input-output pairs such that:
- Input/output contention is resolved
- Packet drops at the outputs are avoided
- Packets meet their time constraints (e.g., deadlines), if any
Example:
- Assign cell preferences at the inputs, e.g., by their position in the input queue
- Assign cell preferences at the outputs, e.g., based on packet deadlines, or on the order in which cells would depart in an OQ router
- Match inputs and outputs based on their preferences (a simplified sketch follows below)
Problem: achieving a high-quality matching is complex, i.e., hard to do in constant time.
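As a rough illustration only (a simplified greedy matching, not the allocator from the paper): each input ranks the outputs it has cells for, and each output is granted to the highest-preference request it receives.

```python
def greedy_match(requests):
    """requests: dict input -> list of (preference, output), lower = better.
    Returns a set of (input, output) pairs with no input or output reused.
    A greedy maximal matching - a simplification for illustration."""
    # Flatten all requests and consider the most-preferred ones first.
    edges = sorted((pref, i, o) for i, prefs in requests.items()
                   for pref, o in prefs)
    used_in, used_out, match = set(), set(), set()
    for _, i, o in edges:
        if i not in used_in and o not in used_out:
            match.add((i, o))
            used_in.add(i)
            used_out.add(o)
    return match

# Input 0 prefers output 0; input 1 also wants output 0 and nothing else.
m = greedy_match({0: [(0, 0), (1, 1)], 1: [(0, 0)]})
assert m == {(0, 0)}  # input 1 loses output 0 and stays unmatched this epoch
```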
A Case Study [Partridge et al '98]
Goal: show that routers can keep pace with improvements in transmission link bandwidth.
Architecture:
- A CIOQ router
- 15 (input/output) line cards: C = 2.4 Gb/s
- Each line card can handle up to 16 (input/output) interfaces
- Separate forwarding engines (FEs) perform the routing lookups
- Backplane: point-to-point (switched) bus, capacity B = 50 Gb/s (32 MPPS)
- B/C = 20, but 25% of B is lost to overhead (control) traffic
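Checking the speedup arithmetic (the 25% overhead figure is from the slide; the rest follows from it):

\[
\frac{B}{C} = \frac{50\ \text{Gb/s}}{2.4\ \text{Gb/s}} \approx 20.8, \qquad
0.75 \times 50\ \text{Gb/s} = 37.5\ \text{Gb/s} \;\ge\; 15 \times 2.4\ \text{Gb/s} = 36\ \text{Gb/s},
\]

so even after overhead the backplane can sustain all 15 line cards at full rate.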
Router Architecture
[Figure: 15 line cards ("data in"/"data out") and forwarding engines connected by the backplane; packet headers are handed to the forwarding engines. A network processor updates the routing tables and sets scheduling (QoS) state, exchanging control data (e.g., routing) with the cards.]
Router Architecture: Data Plane
Line cards:
- Input processing: can handle input links up to 2.4 Gb/s (3.3 Gb/s including overhead)
- Output processing: uses a 52 MHz FPGA; implements QoS
Forwarding engine:
- 415 MHz DEC Alpha processor with a three-level cache to store recent routes
- Up to 12,000 routes in the second-level cache (96 kB); ~95% hit rate
- Entire routing table in the tertiary cache (16 MB, divided into two banks)
Router Architecture: Control Plane
Network processor: 233 MHz Alpha running NetBSD 1.1
- Updates routing tables
- Manages link status
- Implements reservations
Backplane allocator: implemented by an FPGA
- Schedules transfers between input/output interfaces
Data Plane Details: Checksum
Verifying the checksum takes too much time: it increases forwarding time by 21%.
Take an optimistic approach: just incrementally update it.
- Safe operation: if the checksum was correct, it remains correct
- If the checksum was bad, the error will be caught by the end host anyway
Note: IPv6 does not include a header checksum at all!
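A small Python sketch of the incremental update, using the one's-complement identity from RFC 1624, HC' = ~(~HC + ~m + m'), where m is the old 16-bit header word and m' the new one. The header words below are made-up example values.

```python
def add1c(a, b):
    # 16-bit one's-complement addition with end-around carry.
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def update_checksum(cksum, old_word, new_word):
    # RFC 1624 incremental update: HC' = ~(~HC + ~m + m').
    x = add1c(~cksum & 0xFFFF, ~old_word & 0xFFFF)
    x = add1c(x, new_word)
    return ~x & 0xFFFF

def full_checksum(words):
    # Reference: one's-complement sum over the whole header, complemented.
    s = 0
    for w in words:
        s = add1c(s, w)
    return ~s & 0xFFFF

# An IPv4 header as 16-bit words; the checksum field (index 5) starts at zero.
hdr = [0x4500, 0x0054, 0x1C46, 0x4000, 0x4006, 0x0000,
       0xAC10, 0x0A63, 0xAC10, 0x0A0C]
hdr[5] = full_checksum(hdr)
old = hdr[4]
hdr[4] -= 0x0100                       # decrement TTL (high byte of word 4)
patched = update_checksum(hdr[5], old, hdr[4])
hdr[5] = 0x0000
assert patched == full_checksum(hdr)   # incremental update matches recompute
```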
Data Plane Details: Slow Path Processing
- Headers whose destination misses in the cache
- Headers with errors
- Headers with IP options
- Datagrams that require fragmentation
- Multicast datagrams
  - Require multicast routing, which is based on the source address and inbound link as well
  - Require multiple copies of the header to be sent to different line cards
Control Plane: Backplane Allocator
Time is divided into epochs:
- An epoch consists of 16 ticks of the data clock (8 allocation clocks)
- Transfer unit: 64 B (8 data clock ticks)
- During one epoch, up to 15 simultaneous transfers
- One transfer: two transfer units (128 B of data plus auxiliary bits)
A minimum of 4 epochs to schedule and complete a transfer, but scheduling is pipelined:
1. Source card signals that it has data to send to the destination card
2. Switch allocator schedules the transfer
3. Source and destination cards are notified and told to configure themselves
4. Transfer takes place
Flow control through inhibit pins.
The Switch Allocator Card
- Takes connection requests from function cards
- Takes inhibit requests from destination cards
- Computes a transfer configuration for each epoch
15 × 15 = 225 possible pairings; 15! possible patterns
Allocator Algorithm
The Switch Allocator
Disadvantages of the simple allocator:
- Unfair: there is a preference for low-numbered sources
- Requires evaluating 225 positions per epoch, which is too fast for an FPGA
Solution to the unfairness problem: random shuffling of sources and destinations
Solution to the timing problem: parallel evaluation of multiple locations
Priority is given to requests from forwarding engines over line cards, to avoid header contention on the line cards.
Summary: Design Decisions (Innovations)
- Each FE has a complete set of the routing tables
- A switched fabric is used instead of the traditional shared bus
- FEs are on boards distinct from the line cards
- Use of an abstract link-layer header
- QoS processing is included in the router
Packet Scheduling
Packet Scheduling
Decide when, and which, packet to send on the output link. Usually implemented at the output interface.
[Figure: a classifier splits arriving traffic into flows 1…n, each with its own queue; buffer management and a scheduler decide which queued packet is transmitted next]
Why Packet Scheduling?
- Can provide per-flow or per-aggregate protection
- Can provide absolute and relative differentiation in terms of delay, bandwidth, and loss
Fair Queueing
- In a fluid flow system, fair queueing reduces to bit-by-bit round robin among flows
- Each flow receives min(ri, f), where ri is the flow's arrival rate and f is the link fair rate (see next slide)
- Weighted Fair Queueing (WFQ): associate a weight with each flow [Demers, Keshav & Shenker '89]
- WFQ in a fluid flow system is Generalized Processor Sharing (GPS) [Parekh & Gallager '92]
Fair Rate Computation
If the link is congested, compute f such that the allocated rates sum to the link capacity.
Example (C = 10, arrival rates 8, 6, and 2): f = 4:
- min(8, 4) = 4
- min(6, 4) = 4
- min(2, 4) = 2
Fair Rate Computation in GPS
Associate a weight wi with each flow i. If the link is congested, compute f such that the allocations min(ri, f·wi) sum to the link capacity.
Example (C = 10): f = 2:
- min(8, 2×3) = 6 (w1 = 3)
- min(6, 2×1) = 2 (w2 = 1)
- min(2, 2×1) = 2 (w3 = 1)
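A short Python sketch of this computation (our illustration): binary-search for the fair rate f so that the allocations exactly fill the link. It reproduces both examples above.

```python
def fair_rate(rates, weights, capacity, iters=60):
    """Find f with sum_i min(r_i, f*w_i) == capacity (weighted max-min fairness).
    Returns None if the link is not congested."""
    if sum(rates) <= capacity:
        return None
    lo, hi = 0.0, max(r / w for r, w in zip(rates, weights))
    for _ in range(iters):
        f = (lo + hi) / 2
        if sum(min(r, f * w) for r, w in zip(rates, weights)) > capacity:
            hi = f   # allocations overflow the link: lower f
        else:
            lo = f   # link not yet full: raise f
    return lo

assert abs(fair_rate([8, 6, 2], [1, 1, 1], 10) - 4) < 1e-6   # unweighted slide
assert abs(fair_rate([8, 6, 2], [3, 1, 1], 10) - 2) < 1e-6   # weighted slide
```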
Generalized Processor Sharing
[Figure: a link shared by six flows with weights 5, 1, 1, 1, 1, 1. The red session has packets backlogged between time 0 and 10; the other sessions have packets continuously backlogged.]
Generalized Processor Sharing
A work-conserving GPS is defined by

\[
W_i(t_1, t_2) \;=\; \frac{w_i}{\sum_{j \in B(t)} w_j}\, W(t_1, t_2), \qquad \forall i \in B(t),
\]

where
- wi – weight of flow i
- Wi(t1, t2) – total service received by flow i during [t1, t2)
- W(t1, t2) – total service allocated to all flows during [t1, t2)
- B(t) – set of flows backlogged at time t
Properties of GPS
- End-to-end delay bounds for guaranteed service [Parekh and Gallager '93]
- Fair allocation of bandwidth for best-effort service [Demers et al. '89; Parekh and Gallager '92]
- Work-conserving, for high link utilization
Packet vs. Fluid System
- GPS is defined in an idealized fluid flow model: multiple queues can be serviced simultaneously
- Real systems are packet systems: one queue is served at any given time, and packet transmission cannot be preempted
Goal: define packet algorithms that approximate the fluid system while maintaining most of its important properties.
Packet Approximation of Fluid System
Standard technique for approximating fluid GPS:
- Select the packet that finishes first in GPS, assuming there are no future arrivals
Important property of GPS:
- The finishing order of the packets currently in the system is independent of future arrivals
Implementation based on virtual time:
- Assign a virtual finish time to each packet upon arrival
- Serve packets in increasing order of their virtual finish times
Approximating GPS with WFQ
[Figure: the fluid GPS service order over time 0–10 and the corresponding packet schedule]
Weighted Fair Queueing: select the packet that finishes first in GPS.
System Virtual Time
Virtual time (VGPS) – the service that a backlogged flow with weight 1 would receive in GPS.
Service Allocation in GPS
The service received by flow i during an interval [t1, t2), while it is backlogged, is

\[
W_i(t_1, t_2) \;=\; \int_{t_1}^{t_2} \frac{w_i}{\sum_{j \in B(\tau)} w_j}\, C \, d\tau \;=\; w_i \big( V(t_2) - V(t_1) \big)
\]
Virtual Time Implementation of Weighted Fair Queueing

\[
S_j^k = \begin{cases} F_j^{k-1} & \text{if session } j \text{ is backlogged} \\ V(a_j^k) & \text{if session } j \text{ is un-backlogged} \end{cases}
\qquad
F_j^k = S_j^k + \frac{L_j^k}{w_j}
\]

where
- ajk – arrival time of packet k of flow j
- Sjk – virtual starting time of packet k of flow j
- Fjk – virtual finishing time of packet k of flow j
- Ljk – length of packet k of flow j
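A compact Python sketch of these formulas (illustrative only; the system virtual time V(t) is supplied by the caller, and we use the equivalent one-line form S = max(V(a), F_prev)):

```python
import heapq

class WFQ:
    """Virtual-time WFQ sketch: serve packets in order of virtual finish time."""
    def __init__(self):
        self.finish = {}   # last virtual finish time F_j of each flow
        self.heap = []     # (F_j^k, seq, flow, length)
        self.seq = 0       # tie-breaker for the heap

    def arrive(self, flow, length, weight, V):
        # S_j^k = max(V(a_j^k), F_j^{k-1});  F_j^k = S_j^k + L_j^k / w_j
        start = max(V, self.finish.get(flow, 0.0))
        fin = start + length / weight
        self.finish[flow] = fin
        heapq.heappush(self.heap, (fin, self.seq, flow, length))
        self.seq += 1

    def next_packet(self):
        return heapq.heappop(self.heap) if self.heap else None

wfq = WFQ()
wfq.arrive("a", length=100, weight=1, V=0.0)
wfq.arrive("b", length=100, weight=2, V=0.0)
assert wfq.next_packet()[2] == "b"   # higher weight -> earlier virtual finish
```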
Virtual Time Implementation of Weighted Fair Queueing
- Need to keep only per-flow (rather than per-packet) virtual start and finish times
- The system virtual time is used to reset a flow's virtual start time when the flow becomes backlogged again after being idle
System Virtual Time in GPS
[Figure: five flows with weights 1/2, 1/8, 1/8, 1/8, 1/8; the slope of the system virtual time alternates between 2C and C as flows go idle and become backlogged (time axis 4, 8, 12, 16)]
Virtual Start and Finish Times
Utilize the times at which each packet would start (Sik) and finish (Fik) in a fluid system.
[Figure: virtual start and finish times of successive packets on a virtual-time axis (4, 8, 12, 16)]
Goals in Designing Packet Fair Queueing Algorithms
- Improve worst-case fairness (see next slide): use the Smallest Eligible virtual Finish time First (SEFF) policy. Examples: WF2Q, WF2Q+
- Reduce complexity: use simpler virtual time functions. Examples: SCFQ, SFQ, DRR, FBFQ, leap-forward Virtual Clock, WF2Q+
- Improve resource allocation flexibility: service curves
Worst-case Fair Index (WFI)
The maximum discrepancy between the service received by a flow in the fluid flow system and in the packet system.
- In WFQ, WFI = O(n), where n is the total number of backlogged flows
- In WF2Q, WFI = 1
WFI Example
[Figure: a fluid-flow (GPS) schedule compared against the two packet schedules below]
- WFQ (smallest finish time first): WFI = 2.5
- WF2Q (smallest eligible finish time first): WFI = 1
Hierarchical Resource Sharing
Resource contention/sharing occurs at different levels, and resource management policies should be set at different levels, by different entities:
- Resource owner
- Service providers
- Organizations
- Applications
[Figure: a 155 Mb/s link split 100/55 Mb/s between Provider 1 and Provider 2; Provider 1's share split 50/50 Mb/s between Berkeley and Stanford; Berkeley's share further divided among EECS, Stat, and campus traffic (e.g., 20 and 10 Mb/s), down to applications such as seminar video, seminar audio, and Web]
Hierarchical-GPS Example
[Figure: a two-level GPS hierarchy. The red session has packets backlogged starting at time 5; the other sessions have packets continuously backlogged. The first red packet arrives at time 5 and is served at 7.5.]
Packet Approximation of H-GPS
Idea 1: select the packet finishing first in H-GPS, assuming there are no future arrivals
- Problem: the finish order in the system depends on future arrivals, so a virtual time implementation won't work
Idea 2: use a hierarchy of PFQ schedulers to approximate H-GPS
[Figure: H-GPS as a single fluid server vs. packetized H-GPS built from a tree of GPS nodes (weights 10; 6 and 4; 1, 3, and 2 at the leaves)]
Problems with Idea 1
The order of the fourth blue packet's finish time and the first green packet's finish time changes as a result of a red packet arrival.
[Figure: before the red arrival the green packet finishes first; after it, the blue packet finishes first, even though the scheduling decision had to be made earlier]
Hierarchical-WFQ Example
A packet at the second level can miss its deadline (finish time) by an amount of time that, in the worst case, is proportional to WFI.
[Figure: first-level and second-level packet schedules; the first red packet arrives at 5 but is served at 11!]
Hierarchical-WF2Q Example
In WF2Q, all packets meet their deadlines modulo the time to transmit a packet (at line speed) at each level.
[Figure: first-level and second-level packet schedules; the first red packet arrives at 5 and is served at 7]
WF2Q+
WFQ and WF2Q:
- Need to emulate a fluid GPS system: high complexity
WF2Q+:
- Provides the same delay bound and WFI as WF2Q
- Lower complexity
Key difference: the virtual time computation:

\[
V_{WF^2Q+}(t+\tau) \;=\; \max\Big( V_{WF^2Q+}(t) + W(t, t+\tau),\; \min_{i \in B(t)} S_i^{h_i(t)} \Big)
\]

where
- hi(t) – sequence number of the packet at the head of the queue of flow i
- Si^{hi(t)} – virtual starting time of that packet
- B(t) – set of flows backlogged at time t in the packet system
Example Hierarchy
Uncorrelated Cross Traffic
[Figure: four panels showing packet delay (0–60 ms scale) under H-WFQ, H-SCFQ, H-SFQ, and H-WF2Q+ with uncorrelated cross traffic]
Correlated Cross Traffic
[Figure: four panels showing packet delay (0–60 ms scale) under H-WFQ, H-SCFQ, H-SFQ, and H-WF2Q+ with correlated cross traffic]
Recap: System Virtual Time
- Let ta be the starting time of a backlogged interval – an interval during which the queue is never empty
- Let t be an arbitrary time during the backlogged interval starting at ta
- Then the system virtual time at time t, V(t), represents the service that a flow which (1) has weight 1 and (2) is continuously backlogged during [ta, t) would receive during [ta, t)
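Equivalently, in the fluid GPS system with link capacity C, this can be written as (the standard form; the slide states it in words):

\[
V(t) \;=\; \int_{t_a}^{t} \frac{C}{\sum_{j \in B(\tau)} w_j}\, d\tau
\]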
Why Service Curves?
WFQ, WF2Q, and H-WF2Q+ guarantee a minimum rate:

\[
r_i \;=\; \frac{w_i}{\sum_{j=1}^{N} w_j}\, C
\]

where N is the total number of flows.
- A packet is served no later than its finish time in GPS (H-GPS), modulo the sum of the maximum packet transmission times at each level
- For better resource utilization we need to specify more sophisticated services (example to follow shortly)
- Solution: the service curve QoS model
What is a Service Model?
[Figure: offered traffic (connection oriented) enters a network element and leaves as delivered traffic; an "external process" also influences the element]
The QoS measures (delay, throughput, loss, cost) depend on the offered traffic, and possibly on other external processes. A service model attempts to characterize the relationship between offered traffic, delivered traffic, and possibly other external processes.
Arrival and Departure Process
- Rin(t) = arrival process = amount of data arriving up to time t
- Rout(t) = departure process = amount of data departing up to time t
[Figure: cumulative curves Rin(t) and Rout(t) for a network element; the horizontal gap between them is the delay and the vertical gap is the buffer backlog]
Traffic Envelope (Arrival Curve)
The maximum amount of traffic that a flow can send during any interval of length t is bounded by the envelope b(t) (the "burstiness constraint").
[Figure: the envelope b(t) has slope equal to the peak rate near the origin and slope equal to the maximum average rate for large t]
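A common concrete envelope (our example, not from the slide) is a token bucket with peak rate P, bucket depth σ, and average rate ρ:

\[
b(t) \;=\; \min\big(P\,t,\; \sigma + \rho\, t\big)
\]

For small t the peak-rate segment dominates; for large t the average rate ρ does.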
Service Curve
Assume a flow that is idle at time s and backlogged during the interval (s, t). The service curve is the minimum service received by the flow during the interval (s, t).
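In network-calculus terms (the standard formalization of this definition, not spelled out on the slide), a service curve S guarantees

\[
R_{out}(t) \;\ge\; \min_{0 \le s \le t} \big( R_{in}(s) + S(t - s) \big),
\]

i.e., the departure process is lower-bounded by the min-plus convolution of the arrival process with S.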
Big Picture
[Figure: three cumulative plots – the arrival process Rin(t), the service curve (slope at most the link capacity C), and the resulting departure process Rout(t)]
Delay and Buffer Bounds
[Figure: the arrival envelope E(t) plotted against the service curve S(t). The maximum horizontal distance between the two curves is the maximum delay; the maximum vertical distance is the maximum buffer occupancy.]
Service Curve-based Earliest Deadline (SCED)
Packet deadline – the time at which the packet would be served, assuming that the flow receives no more service than its service curve.
- Serve packets in increasing order of their deadlines
Properties:
- If the sum of all service curves is <= C·t, all packets meet their deadlines modulo the transmission time of a maximum-length packet, i.e., Lmax/C
[Figure: cumulative arrivals of packets 1–4 plotted against the service curve; the deadline of the 4th packet is where the curve reaches its cumulative bit count]
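A minimal sketch of the deadline computation for a linear service curve S(t) = rate·t (our illustration; SCED as defined above works for general curves):

```python
def sced_deadlines(arrivals, lengths, rate):
    """Deadlines of one flow's packets under a linear service curve
    S(t) = rate * t: each deadline is the time at which the curve,
    started when the flow became backlogged, reaches the packet's
    cumulative bit count."""
    backlog_start = arrivals[0]   # assume the flow stays backlogged
    deadlines, cum_bits = [], 0
    for length in lengths:
        cum_bits += length
        deadlines.append(backlog_start + cum_bits / rate)
    return deadlines

# Four back-to-back 1000-bit packets on a 1 kb/s service curve:
assert sced_deadlines([0, 0, 0, 0], [1000] * 4, 1000) == [1.0, 2.0, 3.0, 4.0]
```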
Linear Service Curves: Example
[Figure: FTP and video flows with linear service curves – their arrival processes, arrival curves, and deadline computations. With linear curves, video packets have to wait behind FTP packets.]
Non-Linear Service Curves: Example
[Figure: the same FTP and video flows with non-linear service curves – with a steeper initial segment for video, video packets are transmitted as soon as they arrive.]
Summary
- WF2Q+ guarantees that each packet is served no later than its finish time in GPS, modulo the transmission time of a maximum-length packet, and supports hierarchical link sharing
- SCED guarantees that each packet meets its deadline, modulo the transmission time of a maximum-length packet, and decouples bandwidth and delay allocations
- Question: does SCED support hierarchical link sharing? No (why not?)
- Hierarchical Fair Service Curve (H-FSC) [Stoica, Zhang & Ng '97]: supports both hierarchical link sharing and nonlinear service curves