CS716 Advanced Computer Networks, by Dr. Amir Qayyum
Lecture No. 17
Virtual Paths with ATM Two-level hierarchy of virtual connections: 8-bit VPI and 16-bit VCI Switches in the public network use only the 8-bit VPI Corporate sites use the full 24-bit address (VPI + VCI) Much less connection-state info in switches Virtual path: a fat pipe carrying a bundle of virtual circuits
Physical Layers for ATM ATM may run over several physical media ATM was assumed to run over SONET, but the two are entirely separable entities ATM cell boundaries must be correctly identified Successive 53-byte ATM cells in the payload A SONET overhead byte points to the payload Another way is to check the header CRC (the HEC, 5th byte of the cell) to find cell boundaries
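The HEC-based delineation mentioned above can be sketched in code. Per ITU-T I.432, the HEC byte is a CRC-8 of the first four header bytes (generator x^8 + x^2 + x + 1) XORed with 0x55, and a receiver hunting for cell boundaries tests each candidate 5-byte window for a match. This is a minimal illustration, not production framing code:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with the ATM HEC generator x^8 + x^2 + x + 1."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

def atm_hec(header4: bytes) -> int:
    """HEC = CRC-8 of the first four header bytes, XORed with 0x55 (I.432 coset)."""
    assert len(header4) == 4
    return crc8(header4) ^ 0x55

def hec_matches(window5: bytes) -> bool:
    """A receiver checks, at each candidate offset, whether byte 5 equals
    the HEC of bytes 1-4; repeated matches lock cell delineation."""
    return atm_hec(window5[:4]) == window5[4]
```

Repeated HEC matches at 53-byte intervals confirm the boundary; a single mismatch (e.g. a bit error) would not break delineation in a real receiver, which uses a hunt/presync/sync state machine.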
ATM and LANs ATM grew out of the telephone community and was later used for computer communication Significant performance advantage and better scalability of switched over shared media No distance limitation in ATM, making it a good choice for a high-performance LAN backbone Point-to-point, long distance Gigabit Ethernet is a competing technology to ATM
ATM as a LAN Backbone Different from traditional LANs; no native support for broadcast or multicast
ATM in a LAN How to broadcast to all nodes on an ATM LAN? Without knowing all the addresses Without setting up a VC to all of them
ATM in a LAN Two solutions Redesign protocols that treat the LAN differently from what ATM can provide (e.g. ATMARP) Make ATM behave like shared media, without losing the performance advantage of switched media (e.g. LANE) An ATM address is different from a unique 48-bit MAC address
Shared Ethernet Emulation with LANE All hosts think they are on the same Ethernet [Figure: hosts attached to Ethernet switches and hosts with LANE/Ethernet adaptor cards, all connected through ATM switches]
LAN Emulation (LANE) with ATM Transparent shared media emulation of ATM Adds (not changes) functionality to ATM switches Each device needs a global MAC address, as well as an ATM address to establish a VC
LAN Emulation (LANE) with ATM Devices connect as LAN Emulation Clients (LEC) LANE provides Ethernet-like interface to LECs Similar solutions for other networks: VPNs on WANs, VLANs on large, switched Ethernets
ATM / LANE Protocol Layers [Figure: host stack is higher-layer protocols (IP, ARP, ...) over an Ethernet-like interface provided by signalling + LANE, over AAL5, ATM and PHY; the switch stack is ATM over PHY]
Clients and Servers in LANE LAN Emulation Client (LEC) Host, bridge, router or switch LAN Emulation Server (LES) Maintains client’s MAC and ATM addresses Maintains ATM address of BUS
Clients and Servers in LANE LAN Emulation Configuration Server (LECS) High-level network management when LEC starts up Reachable by preset VC (recall known server port#) Maintains mapping of ATM address to LANE type
Clients and Servers in LANE Broadcast and Unknown Server (BUS) Emulates broadcast and multicast; critical to LANE Uses a point-to-multipoint VC with all clients Servers physically located in one or more devices
LANE Registration Client contacts LECS on a predefined VC and sends its ATM address LECS returns LAN type, MTU and the ATM address of the LES Client signals a connection to the LES and registers its MAC and ATM addresses LES returns the ATM address of the BUS Client signals a connection to the BUS BUS adds the client to its point-to-multipoint VC [Figure: hosts H1-H3, LECS, LES and BUS attached to the ATM network]
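The registration sequence above can be sketched as message exchanges. LANE is a signalling protocol, not a Python API, so the classes, method names and addresses below are purely illustrative:

```python
# Hypothetical sketch of LANE client registration (all names illustrative).

class LECS:
    """LAN Emulation Configuration Server, reachable on a predefined VC."""
    def configure(self, client_atm_addr):
        # Returns LAN type, MTU and the ATM address of the LES.
        return {"lan_type": "ethernet", "mtu": 1516, "les_addr": "atm://les"}

class LES:
    """LAN Emulation Server: maintains MAC-to-ATM mappings."""
    def __init__(self):
        self.registry = {}              # MAC address -> ATM address
        self.bus_addr = "atm://bus"
    def register(self, mac, atm_addr):
        self.registry[mac] = atm_addr
        return self.bus_addr            # client learns where the BUS is

class BUS:
    """Broadcast and Unknown Server: point-to-multipoint VC to all clients."""
    def __init__(self):
        self.members = []
    def join(self, atm_addr):
        self.members.append(atm_addr)

def lane_register(lecs, les, bus, mac, atm_addr):
    cfg = lecs.configure(atm_addr)           # contact LECS on predefined VC
    bus_addr = les.register(mac, atm_addr)   # register with LES, learn BUS
    bus.join(atm_addr)                       # join the BUS's p2mp VC
    return cfg
```

After `lane_register` completes, the client can be reached by broadcast (via the BUS) and resolved by MAC address (via the LES).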
LANE Circuit Setup Client (H1) knows the destination MAC address of the receiver (H2) Client (H1) sends the 1st packet to the BUS BUS sends an address resolution request to the LES LES returns H2's ATM address to the client (H1) Client (H1) signals a connection to H2 for subsequent packets [Figure: hosts H1-H3, LECS, LES and BUS attached to the ATM network]
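The data-path side of this sequence can be sketched as a single function: the first frame is flooded via the BUS while the sender resolves the destination MAC to an ATM address through the LES, then opens a direct VC. The function and parameter names are hypothetical:

```python
# Hypothetical sketch of LANE circuit setup (names illustrative).

def send_first_frame(frame, dst_mac, les_registry, bus_flood, open_vc):
    """Flood the first frame, resolve dst_mac via the LES, open a direct VC."""
    bus_flood(frame)                       # 1st packet goes to the BUS
    atm_addr = les_registry.get(dst_mac)   # address resolution via the LES
    if atm_addr is None:
        return None                        # unknown: keep using the BUS
    return open_vc(atm_addr)               # direct VC for subsequent packets
```

Subsequent frames for the same destination then bypass the BUS entirely, which is how LANE keeps the performance advantage of switched media.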
Switches: The Intersections
The Intersections Design the intersection to accommodate traffic flows [Figure: map of Islamabad/Rawalpindi intersections: Zero Point, Pir Wadhai, Faizabad Flyover, Saddar, Rawal Dam, Ayub Park, Airport]
Contention in Switches Some packets destined for the same output One goes first Others delayed or dropped Delaying packets requires buffering Finite capacity, so some packets must still drop At inputs Adds false contention (head-of-line blocking) Sometimes necessary At outputs Can also exert "backpressure"
Output Buffering [Figure: 1x6 switch as airport check-in with a separate line per counter: Mr. A waiting to claim a refund of Rs.100 and Mr. X writing a complaint letter delay only their own counters while you try to check in]
Input Buffering: Head-of-line Blocking [Figure: a single line feeds all counters; Mr. A claiming his refund and Mr. X writing his complaint letter block you at the head of the line, even though customer service agents are standing by]
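Head-of-line blocking is easy to demonstrate with a toy model of a FIFO input-buffered switch: per cycle, each output accepts at most one packet, and only the packet at the head of each input queue may contend. This is a sketch, not any real switch's scheduler:

```python
from collections import deque

def fifo_cycle(input_queues):
    """Run one switching cycle; return the packets delivered (each packet is
    just its destination output port), leaving blocked ones queued."""
    delivered, claimed_outputs = [], set()
    for q in input_queues:
        if q and q[0] not in claimed_outputs:   # head packet's output is free
            claimed_outputs.add(q[0])
            delivered.append(q.popleft())
    return delivered

# Both inputs lead with a packet for output 3. The loser's packet for the
# idle output 5 is stuck behind its queue head: head-of-line blocking.
queues = [deque([3]), deque([3, 5])]
sent = fifo_cycle(queues)
```

Only one packet crosses the fabric even though output 5 was idle; the "peeking" (non-FIFO) input buffering mentioned later would let the packet for output 5 jump the queue.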
Backpressure [Figure: 1x6 switch as check-in counters; a counter signals "no more, please" back up the line while you try to check in]
Backpressure [Figure: switch 1 and switch 2, with switch 2 signalling "no more, please"] Propagation delay requires that switch 2 exert backpressure at a high-water mark rather than when its buffer is completely full It is thus typically only used in networks with small propagation delays (e.g., switch fabrics)
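The high-water-mark idea can be sketched as a buffer that signals "stop" before it is completely full, leaving headroom for packets already in flight during the propagation delay. The capacity and threshold below are illustrative:

```python
# Sketch of high-water-mark backpressure (illustrative numbers).

class Buffer:
    def __init__(self, capacity, high_water):
        self.capacity, self.high_water = capacity, high_water
        self.queue = []

    def offer(self, pkt):
        """Accept a packet and return the flow-control signal to upstream."""
        if len(self.queue) >= self.capacity:
            return "drop"                 # headroom was exhausted
        self.queue.append(pkt)
        # Signal "stop" early: packets in flight will still land safely.
        return "stop" if len(self.queue) >= self.high_water else "ok"

# With high_water two slots below capacity, up to two in-flight packets
# (worth one round-trip of propagation delay) arrive after "stop" without drops.
buf = Buffer(capacity=8, high_water=6)
signals = [buf.offer(p) for p in range(8)]
```

The larger the propagation delay, the more headroom (capacity minus high-water mark) is needed, which is why backpressure suits switch fabrics better than wide-area links.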
Switching Hardware Multi-input multi-output device, getting packets from inputs to outputs as fast as possible Performance of a switch is limited by I/O bus bandwidth (each packet traverses the bus twice) A 1 Gbps I/O bus can support ten T3 (45 Mbps) links or three STS-3 (155 Mbps) links, but not even one STS-12 (622 Mbps) link Success or failure of a new protocol depends on whether it takes advantage of the switch's capabilities
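The bus-bandwidth arithmetic behind these numbers can be checked directly. Since every packet crosses the shared bus twice (once in, once out), a 1 Gbps bus carries about 500 Mbps of aggregate link traffic:

```python
# Checking the I/O-bus arithmetic from the text.

bus_bps = 1_000_000_000
usable_bps = bus_bps // 2            # every packet traverses the bus twice

t3, sts3, sts12 = 45_000_000, 155_000_000, 622_000_000
n_t3 = usable_bps // t3              # 11, so ten T3 links fit comfortably
n_sts3 = usable_bps // sts3          # three STS-3 links fit
fits_sts12 = sts12 <= usable_bps     # a single STS-12 already exceeds the bus
```

This is why shared-bus designs give way to switching fabrics at high line rates.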
Switching Fabric Special-purpose (switching) hardware General problem Connect N inputs to M outputs (NxM switch) Often N=M (bidirectional links) Design goals High throughput: want aggregate close to MIN (sum of inputs, sum of outputs) Avoid contention (fabric faster than ports) Good scalability: linear size/cost growth in N/M
Switching Fabric and Ports [Figure: input ports and output ports surrounding the switch fabric; contention is to be avoided inside the fabric]
Switch: Fabric and Ports The fabric's job is to deliver packets to the right output [Figure: input and output ports around a switch fabric with small internal buffering]
Ports and Fabric Ports deal with the complexity of the real world Virtual circuit management is handled in ports Determine the output port using forwarding tables The input port is often the first performance bottleneck Header processing and handing the packet to the fabric
Ports and Fabric Buffering is required at ports Buffer management has a profound impact on performance Internal (in-fabric) or output buffering is normally used Fabric: simply move packets from inputs to outputs
Design Goals - Throughput An n x m switch can provide a max ideal throughput of S = S1 + S2 + … + Sn Only possible if traffic at inputs is evenly distributed across all outputs Sustained throughput higher than the link speed of an output is not possible
Design Goals - Throughput Variable-size packets affect performance Some operations have constant overhead per packet Switch performs differently for different packet sizes Packets per second (pps) rate is also important Most switches are subject to internal contention Determine performance under different traffic loads
Design Goals - Throughput Traffic models are important to throughput Arrival time, output port, packet length Extremely difficult to achieve accurate models Traffic modeling has been very successful in telephony Designers now expect a wide range of throughputs To handle a steady stream of 64-byte packets, a 40 Gbps switch needs a rate of 78M pps !!!
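The 78M pps figure follows directly from the arithmetic: a minimum-size 64-byte packet is 512 bits, so a saturated 40 Gbps link delivers 40e9 / 512 packets per second:

```python
# The packet-rate figure from the text: 40 Gbps of back-to-back
# 64-byte packets.

link_bps = 40 * 10**9
pkt_bits = 64 * 8                 # 512 bits per minimum-size packet
pps = link_bps // pkt_bits        # 78,125,000 packets per second
```

Per-packet constant overheads (header lookup, scheduling) are what make this rate, not the raw bit rate, the binding constraint for small packets.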
Design Goals - Scalability Cost of hardware rises quickly as the number of ports n increases Adding ports increases hardware & design complexity Scalability in terms of rate of increase in cost Design complexity determines maximum switch size Switch designs run into problems at some maximum number of inputs and outputs
Switch Performance Avoid contention with buffering Use output buffering when possible Apply backpressure through the fabric Input buffering with "peeking" (non-FIFO semantics) to reduce head-of-line blocking Drop packets if the input buffer overflows Good scalability: O(N) ports Port design complexity O(N) gives O(N^2) for the switch Port design complexity O(1) gives O(N) for the switch
Crossbar ("Perfect") Switch Problem: hardware scales as O(N^2)
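The O(N^2) growth is just the crosspoint count: an N x N crossbar needs one crosspoint for every (input, output) pair, so doubling the port count quadruples the hardware:

```python
# Why a crossbar scales as O(N^2): one crosspoint per (input, output) pair.

def crosspoints(n_ports: int) -> int:
    return n_ports * n_ports

growth = crosspoints(32) / crosspoints(16)   # 4x hardware for 2x ports
```

This quadratic cost is what motivates the multistage fabrics (knockout, banyan, Batcher-banyan) on the following slides.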
Knockout Switch: Pick L from N [Figure: an 8-to-4 concentrator built from 2x2 random selectors and delay units, choosing L=4 of N=8 inputs for the outputs] Problem: what if more than L arrive?
Shared Memory Switch [Figure: inputs multiplexed into a shared buffer memory under write control, demultiplexed to the outputs under read control]
Self-Routing Fabrics Use source routing on “network” within switch Input port attaches output port number as header Fabric routes packet based on output port Types Banyan network Batcher-Banyan network Sunshine switch
Banyan Network Each 2x2 element routes on one bit of the destination address (MSB first): a 0 bit is sent up, a 1 bit is sent down No contention if inputs are sorted and unique
Batcher (Merge Sort) Network [Figure: routing packets through a Batcher network] Batcher-Banyan Network Attach the two networks back-to-back Arbitrary unique permutations are routed without contention
Batcher-Banyan Network [Figure: the Batcher sorting stage uses 2x2 comparators with alternating orientations (some send the 1 bit up and the 0 bit down, others the reverse) in front of the banyan]
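The sorting half of the fabric is built entirely from compare-exchange elements. A software analogue is Batcher's bitonic sorter (one of his two sorting-network constructions); the sketch below performs the same fixed compare-exchange schedule sequentially, sorting destination addresses before they would enter the banyan:

```python
# Batcher's bitonic sorting network, executed sequentially: a fixed schedule
# of compare-exchange steps, independent of the data, sorts any power-of-2
# length input. Feeding sorted, unique outputs into a banyan fabric then
# routes without internal contention.

def bitonic_sort(values):
    """Sort a power-of-2 length list using only compare-exchange steps."""
    a = list(values)
    n = len(a)
    k = 2
    while k <= n:                    # size of bitonic sequences being merged
        j = k // 2
        while j >= 1:                # comparator distance within a merge
            for i in range(n):
                partner = i ^ j
                if partner > i:      # visit each comparator once
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

Since the comparison schedule never depends on the values, the same structure maps directly onto hardware comparators wired in fixed stages, which is what makes it suitable for a switching fabric.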
Sunshine Switch Like a Knockout switch, except that it recirculates (and marks) overflow packets, i.e., when more than L arrive in one cycle