Advanced Topics in Networking: MPLS and GMPLS
Hang Liu, Thomson Inc., Corporate Research Lab, Princeton, NJ
Note: Thanks to Dr. Debanjan Saha for the teaching materials on MPLS.
MPLS: Multi-protocol Label Switching
Topics
- Introduction: history and motivation
- MPLS mechanisms
- MPLS protocols: RSVP-TE / CR-LDP
- MPLS applications: VPNs, traffic engineering, restoration
Why MPLS?
- Ultra-fast forwarding: use switching instead of routing
- IP traffic engineering: constraint-based routing
- Virtual private networks: controllable tunneling mechanism
- Protection and restoration
IP Forwarding Table
(Diagram: routers with interfaces 1-3, each holding a forwarding table that maps the prefixes 47.1.*.*, 47.2.*.*, and 47.3.*.* to outgoing interfaces.)
Hop-by-Hop IP Forwarding
(Diagram: a packet destined to 47.1.1.1 is forwarded hop by hop; each router performs an independent lookup toward the 47.1 network.)
Routing Lookup
Longest-prefix match is (was) expensive; label matching is much less expensive. At 10 Gbps, a line card must sustain on the order of 20M packets/sec through the switch fabric.
(Diagram: forwarding table with columns Prefix / Next Hop / Interface, e.g. 9.*.*.* → 14.1.2.1 via interface 2; 9.1.*.* → 67.1.2.2 via 4; 9.2.*.* → 71.1.2.3 via 6; 9.1.1.* and 9.2.1.* → 113.1.2.1 via 8.)
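The cost difference can be sketched in a few lines of Python (not from the lecture; the table entries are illustrative values modeled on the slide): IP forwarding must scan for the most specific matching prefix, while MPLS forwarding is a single exact-match lookup on the label.

```python
import ipaddress

# Illustrative forwarding table: (prefix, next hop, outgoing interface).
PREFIX_TABLE = [
    ("9.0.0.0/8", "14.1.2.1", 2),
    ("9.1.0.0/16", "67.1.2.2", 4),
    ("9.1.1.0/24", "113.1.2.1", 8),
]

def longest_prefix_match(dst):
    """IP forwarding: check every prefix, keep the longest one that matches."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, nh, ifc in PREFIX_TABLE:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nh, ifc)
    return best[1:] if best else None

# MPLS forwarding: the label is an exact-match index — one lookup, no scan.
LABEL_TABLE = {17: ("113.1.2.1", 8), 42: ("67.1.2.2", 4)}

def label_switch(label):
    return LABEL_TABLE[label]

print(longest_prefix_match("9.1.1.7"))  # the most specific /24 wins
print(label_switch(17))                 # single dictionary lookup
```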
MPLS Labels
(Diagram: downstream-on-demand label distribution for prefix 47.1 — label requests ("Request: 47.1") travel toward the destination, and label mappings ("Mapping: 0.40", "Mapping: 0.50") are returned hop by hop.)
Label Switched Path
(Diagram: a packet destined to 47.1.1.1 enters the established LSP at the ingress and is label-switched along the path toward network 47.1.)
Forwarding Equivalence Classes
FEC = "a subset of packets that are all treated the same way by a router."
- The concept of FECs provides a great deal of flexibility and scalability.
- In conventional routing, a packet is assigned to a FEC at each hop (an L3 lookup); in MPLS this is done only once, at the network ingress.
- Packets destined for different address prefixes can be mapped to a common path.
(Diagram: packets IP1 and IP2 enter at an LER and share one LSP through the LSRs, carrying labels L1, L2, L3 at successive hops.)
MPLS Terminology
- LDP: Label Distribution Protocol
- LSP: Label Switched Path
- FEC: Forwarding Equivalence Class
- LSR: Label Switching Router
- LER: Label Edge Router
Label Distribution Methods
Downstream label distribution:
- LSR2 discovers a 'next hop' for a particular FEC.
- LSR2 generates a label for the FEC and communicates the binding to LSR1.
- LSR1 inserts the binding into its forwarding tables.
- If LSR2 is the next hop for the FEC, LSR1 can use that label knowing that its meaning is understood.
Downstream-on-demand label distribution:
- LSR1 recognizes LSR2 as its next hop for an FEC.
- LSR1 sends LSR2 a request for a binding between the FEC and a label.
- If LSR2 recognizes the FEC and has a next hop for it, it creates a binding and replies to LSR1.
- Both LSRs then have a common understanding.
Both methods are supported, even in the same network at the same time.
Distribution Control
Independent LSP control:
- Each LSR makes an independent decision on when to generate labels and communicate them to upstream peers; it communicates a label-FEC binding to its peers once the next hop has been recognized. The LSP is formed as incoming and outgoing labels are spliced together.
- Comparison: labels can be exchanged with less delay, and setup does not depend on availability of the egress node; however, granularity may not be consistent across nodes at the start, and a separate loop detection/mitigation method may be required.
Ordered LSP control:
- A label-FEC binding is communicated to peers only if the LSR is the egress LSR for the particular FEC, or a label binding has been received from its next hop; LSP formation thus 'flows' from egress to ingress.
- Comparison: requires more delay before packets can be forwarded along the LSP, and depends on availability of the egress node; however, it provides a mechanism for consistent granularity and freedom from loops, and is used for explicit routing and multicast.
Both methods are supported in the standard and can be fully interoperable.
Label Retention Methods
Liberal label retention:
- The LSR maintains bindings received from LSRs other than the valid next hop.
- If the next hop changes, it may begin using these bindings immediately.
- Allows more rapid adaptation to routing changes, but requires the LSR to maintain many more labels.
Conservative label retention:
- The LSR maintains only bindings received from the valid next hop.
- If the next hop changes, a binding must be requested from the new next hop.
- Restricts adaptation to routing changes, but fewer labels must be maintained.
The label retention method trades off label capacity against speed of adaptation to routing changes.
Label Encapsulation
MPLS encapsulation is specified over various media types (ATM, Frame Relay, Ethernet, PPP). The top label may reuse an existing link-layer field (VPI/VCI for ATM, DLCI for Frame Relay); lower labels use a new "shim" label format inserted between the L2 header and the IP packet and payload.
Label Format
- Label (20 bits): the label value.
- Exp (3 bits): used to identify the class of service.
- Stack (1 bit): used to identify the last label in the label stack.
- TTL (8 bits): a time-to-live counter; special processing rules are used to mimic IP TTL semantics.
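The 32-bit shim entry layout above can be exercised with a small pack/unpack sketch (not from the lecture; field widths follow the slide):

```python
def pack_label(label, exp, s, ttl):
    """Pack one 32-bit MPLS shim entry: 20-bit label, 3-bit Exp, 1-bit S, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(word):
    """Split a 32-bit shim entry back into its fields."""
    return {"label": word >> 12, "exp": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

entry = pack_label(label=17, exp=5, s=1, ttl=64)
print(unpack_label(entry))
```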
Label Distribution Protocols
- Label Distribution Protocol (LDP)
- Constraint-based Routing LDP (CR-LDP)
- Extensions to RSVP
- Extensions to BGP
LDP: Label Distribution Protocol
Label distribution ensures that adjacent routers have a common view of FEC-label bindings.
- Step 1: the downstream LSR creates a binding between a FEC and a label value (e.g. "for 47.0.0.0/8 use label 17").
- Step 2: the LSR communicates the binding to the adjacent LSR.
- Step 3: each LSR inserts the label value into its forwarding base (Label Information Base: Label-In / FEC / Label-Out).
The result is a common understanding of which FEC the label refers to.
(Diagram: LSR1 → LSR2 → LSR3, each with a routing entry for 47.0.0.0/8; label 17 is advertised upstream and an IP packet to 47.80.55.3 follows the installed bindings.)
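The three steps above can be sketched as a minimal simulation (not LDP message formats — class and method names here are illustrative):

```python
class Lsr:
    """Toy LSR holding a Label Information Base (LIB): FEC -> {in, out} labels."""
    def __init__(self, name):
        self.name = name
        self.next_label = 17          # arbitrary starting label value
        self.lib = {}

    def bind(self, fec):
        """Step 1: allocate a local (incoming) label for the FEC."""
        label = self.next_label
        self.next_label += 1
        self.lib[fec] = {"in": label, "out": None}
        return label

    def learn(self, fec, label):
        """Step 3: install a binding advertised by the downstream peer as Label-Out."""
        self.lib.setdefault(fec, {"in": None, "out": None})["out"] = label

lsr1, lsr2 = Lsr("LSR1"), Lsr("LSR2")
advertised = lsr2.bind("47.0.0.0/8")   # step 1 at the downstream LSR
lsr1.learn("47.0.0.0/8", advertised)   # step 2: binding communicated upstream
print(lsr1.lib["47.0.0.0/8"])          # both now agree what label 17 means
```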
LDP: Basic Characteristics
- Provides LSR discovery mechanisms to enable LSR peers to find each other and establish communication.
- Defines four classes of messages:
  - DISCOVERY: finding neighboring LSRs
  - ADJACENCY: initialization, keepalive, and shutdown of sessions
  - LABEL ADVERTISEMENT: label binding advertisement, request, withdrawal, and release
  - NOTIFICATION: advisory information and error signaling
- Runs over TCP for reliable delivery of messages, except for discovery, which uses UDP and IP multicast.
- Designed to be extensible, with messages encoded as TLV (type, length, value) objects.
LDP Messages
- INITIALIZATION
- KEEPALIVE
- LABEL MAPPING
- LABEL WITHDRAWAL
- LABEL RELEASE
- LABEL REQUEST
Explicitly Routed LSP
(Diagram: a packet to 47.1.1.1 follows an explicitly routed LSP that deviates from the shortest hop-by-hop path toward network 47.1.)
ER LSP: Advantages
- The operator has routing flexibility (policy-based, QoS-based).
- Routes other than the shortest path can be used.
- Routes can be computed based on constraints, in exactly the same manner as ATM, using a distributed topology database (traffic engineering).
ER LSP: Discord!
Two signaling options are proposed in the standards, CR-LDP and RSVP extensions:
- CR-LDP = LDP + explicit route
- RSVP ext = traditional RSVP + explicit route + scalability extensions
The market will probably have to resolve it; survival of the fittest is not such a bad thing.
MPLS and QoS in IP Networks
- Integrated Services
- Differentiated Services
Integrated Services Internet
- Applications specify traffic and service specs:
  - Tspec: traffic spec, including peak rate, maximum packet size, burst size, and mean rate
  - Rspec: service spec, specifically the service rate
- Two classes of service are defined:
  - Guaranteed service: satisfies hard guarantees on bandwidth and delay
  - Controlled-load service: provides service similar to that of an "unloaded network"
- RSVP was extended (RSVP-TE) to support traffic-engineering signaling, and further extended to add MPLS support.
Differentiated Services Internet
- IP packets carry a 6-bit differentiated services code point (DSCP), potentially supporting 64 different classes of service.
- Routers map each DSCP to a per-hop behavior (PHB); PHBs can be standard or local. Standard PHBs include:
  - Default: no special treatment (best effort)
  - Expedited forwarding (EF): low delay and loss
  - Assured forwarding (AF): multiple classes, each with multiple drop priorities
- LSRs don't sort based on IP headers, so DSCPs must be mapped to the EXP field in the MPLS shim header.
- The EXP field is only 3 bits wide and can support only 8 DSCPs/PHBs; labels can be used if more than 8 PHBs need to be supported.
- The same approach can be used for link layers that do not use shim headers, e.g. ATM.
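The 6-bit-to-3-bit squeeze can be illustrated with a toy mapping table (a sketch, not a standard mapping — the table below is an assumed operator policy using well-known DSCP values):

```python
# Illustrative policy: collapse several 6-bit DSCPs onto one 3-bit EXP value each.
DSCP_TO_EXP = {
    0: 0,                   # Default / best effort
    46: 5,                  # EF (expedited forwarding)
    10: 1, 12: 1, 14: 1,    # AF11/AF12/AF13 share one EXP value
    18: 2, 20: 2, 22: 2,    # AF21/AF22/AF23
}

def exp_for(dscp):
    """Map a 6-bit DSCP to the 3-bit EXP field of the shim header."""
    assert 0 <= dscp < 64, "DSCP is 6 bits"
    return DSCP_TO_EXP.get(dscp, 0)  # unmapped code points fall back to best effort
```

Note how the three AF1x drop priorities collapse to a single EXP value: with only 8 code points, information is necessarily lost, which is why labels themselves carry the class when more than 8 PHBs are needed.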
Traffic Engineering with RSVP
(Diagram: the sender issues PATH messages carrying the Tspec hop by hop toward the receiver; the receiver returns RESV messages carrying the Rspec along the reverse path to reserve resources.)
Label Distribution with RSVP-TE
(Diagram: PATH messages carry the Tspec downstream; RESV messages carry the Rspec back upstream and also distribute labels — e.g. Label = 5, then Label = 10 — so the reservation and the label bindings are installed in one round trip.)
MPLS Protection
- End-to-end protection
- Fast node and link reroute
MPLS Protection: End-to-end Path Protection
(Diagram: a primary LSP and a backup LSP between the same endpoints across nodes A-F.)
The backup and primary LSPs should be route-diverse.
MPLS Protection: Fast Reroute
Detour around node or link failures.
- The example LSP traverses (A, B, C, D, E, F).
- Each detour avoids the immediate downstream node and the link toward it (detours avoid AB, BC, CD, DE).
- Exception: the last detour only avoids link DE.
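The per-hop detour computation amounts to a shortest-path search on the topology with the protected node removed. A minimal sketch (the topology dictionary is an invented example, not the slide's figure):

```python
from collections import deque

# Illustrative topology: primary LSP A-B-C-D-E-F, with alternate nodes G and H.
TOPOLOGY = {
    "A": ["B", "G"], "B": ["A", "C", "G"], "C": ["B", "D", "H"],
    "D": ["C", "E", "H"], "E": ["D", "F"], "F": ["E"],
    "G": ["A", "B", "H"], "H": ["C", "D", "G"],
}

def detour(src, avoid, merge):
    """BFS shortest path src -> merge that never visits `avoid` (the failed node)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == merge:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt != avoid and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no detour exists around this failure

# Point of local repair B protects its downstream node C, merging back at D.
print(detour("B", "C", "D"))
```

In a real network, the last hop's detour would avoid only the link (not node E's successor F, the egress), which simply means running the search on a graph with one edge removed instead of one node.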
Detour Merging
(Diagram: the detour avoiding AB and the detour avoiding BC are merged into a single detour avoiding both AB and BC.)
- Reduces the state that must be maintained.
- Improves resource utilization.
MPLS Protection Types
1+1:
- The backup LSP is established in advance with dedicated resources; data is sent simultaneously on both primary and backup.
- Switchover is performed only by the egress LSR.
- Fastest, but the most resource-intensive.
1:1:
- Same as 1+1, except that data is not sent on the backup.
- Requires failure notification to the ingress LSR to start transmitting on the backup; notification may be sent to the egress as well.
- Resources on the backup may be used by other traffic: low-priority traffic (e.g. plain IP traffic), or shared with other backup paths.
MPLS VPN: The Problem
(Diagram: a provider network connects Customer 1 sites 1-3 and Customer 2 sites 1-3; both customers use overlapping private address space 10.1/16, 10.2/16, 10.3/16.)
MPLS VPN: The Model
(Diagram: each customer sees its own virtual network — Customer 1's sites form one virtual network and Customer 2's sites another — over the shared provider infrastructure.)
MPLS VPN: The Solution
(Diagram: provider edge routers keep per-customer VPN routing and forwarding tables (VRF 1, VRF 2) and tunnel traffic between sites over MPLS LSPs, keeping the overlapping 10.x/16 address spaces separate.)
GMPLS: Generalized MPLS & ASON: Automatically Switched Optical Network
Outline
- ASON control plane
- Standards
- UNI and NNI
- Protection and restoration
Traditional Management Plane for Optical Transport Networks
- Many manual operations.
- Integration of different EMSs and NMSs is complex: multiple types of equipment from different vendors, with different technologies.
- Automatic end-to-end provisioning is not easy: planning, path computation, connection establishment.
(Diagram: class 5 switches, IP and FR/ATM networks attached to an optical transport network (OTN), managed through multiple EMSs under one NMS.)
Distributed Control Plane
A distributed control plane offers:
- automatic neighbor and topology discovery
- automatic end-to-end provisioning and connection modification
- scalability and interoperability
- unified traffic engineering and protection/restoration
This applies in an environment where IP router networks are interconnected via a mesh optical network.
(Diagram: client networks (IP, ATM, SDH) connect to an optical transport network of several optical domains; signaling and routing run over control channels across UNI, I-NNI, and E-NNI interfaces, supporting switched connections (SC) and soft permanent connections (SPC) via the NMS/EMS.)
ASON Control Plane
Goals of the ASON control plane:
- Facilitate configuration of connections within an optical transport network in a reliable, efficient, scalable, interoperable, and automatic way:
  - Switched connection (SC): requested by a user
  - Soft permanent connection (SPC): initiated by the management plane
- Well suited to applications that require dynamic circuits (holding time on the order of the provisioning time).
- Allow reconfiguring or modifying connections for existing calls.
- Perform protection and restoration functions.
ASON Control Plane Components
- Call controller
- Connection controller
- Link resource manager
- Routing controller
- Discovery agent
- Termination and adaptation performer
- Etc.
Related Standards Bodies
- ITU: ASON architecture and components; UNI and NNI interfaces.
- IETF: Generalized MPLS (GMPLS) protocols — extends MPLS/IP protocols based on generalized interface requirements:
  - signaling (RSVP-TE and CR-LDP with GMPLS extensions)
  - routing (OSPF-TE and IS-IS with GMPLS extensions)
  - link management and neighbor discovery (LMP)
- OIF: focuses on applying the IETF protocols in an overlay model; generates implementation agreements for UNI and NNI.
GMPLS: Generalized MPLS
GMPLS handles nodes with diverse capabilities:
- Packet Switch Capable (PSC)
- Time Division Multiplexing capable (TDM)
- Lambda Switch Capable (LSC)
- Fiber Switch Capable (FSC)
Each node is treated as an MPLS label-switching router (LSR). Lightpaths and TDM circuits are considered analogous to label-switched paths (LSPs); selection of wavelengths (λs) and OXC ports is considered analogous to selection of labels.
(Diagram: nested PSC, TDM, LSC, and FSC clouds.)
Overview of IETF GMPLS Protocols
A GMPLS-based distributed control plane provides:
- automatic service provisioning (signaling)
- dynamic network topology and resource availability dissemination (routing)
- neighbor discovery and link management (link management)
Control Channel
A bi-directional channel is required between two logically or physically adjacent nodes to exchange control messages:
- in-band with data (e.g. between two IP routers, or in SONET overhead bytes)
- out-of-band through a separate link or even a separate network (such as an IP network)
The data channel and control channel are de-coupled: one control channel may serve one or multiple data channels.
(Diagram: a link bundle of data channels 1..N between two nodes, with the control channel either sharing data channel 1 or running over a separate IP control channel.)
Connection Provisioning through GMPLS
- A connection request is received from a client or a management agent at the ingress node.
- The ingress node computes the explicit route from ingress to egress, taking into account a set of constraints (bandwidth requirements, resource availability, protection/restoration, and traffic engineering constraints). This requires a routing protocol to disseminate network topology and link-state information.
- Connection establishment is then signaled along the path (RSVP-TE or CR-LDP extensions).
(Diagram: a request arrives at ingress node A and the connection is signaled across the network to egress node B.)
Signaling Protocol
Establishes and deletes paths:
- LSP setup: label request and resource reservation/allocation
- LSP deletion: label and resource release
GMPLS signaling:
- Extends MPLS label semantics to accommodate fiber, waveband, lambda, TDM, and packet-capable LSP establishment.
- Extends RSVP-TE and CR-LDP to carry the generalized label objects over an explicit path.
- Supports bi-directional LSP setup.
- Suggested Label: the upstream node suggests a label to the downstream node to speed up configuration.
- Label Set: limits the labels that the downstream node can choose from.
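The interplay of Suggested Label and Label Set can be sketched as a small selection routine (an illustrative sketch, not the GMPLS object encoding; the tie-break of picking the smallest free label is an assumption):

```python
def choose_label(free_labels, label_set=None, suggested=None):
    """Downstream label choice: restricted by the upstream Label Set,
    preferring the upstream Suggested Label when it is acceptable."""
    candidates = set(free_labels)
    if label_set is not None:
        candidates &= set(label_set)   # Label Set limits what downstream may pick
    if suggested in candidates:
        return suggested               # honoring the suggestion speeds configuration
    return min(candidates) if candidates else None

print(choose_label({32, 33, 40}, label_set={33, 40, 41}, suggested=40))  # suggestion honored
print(choose_label({32, 33, 40}, label_set={33, 40, 41}, suggested=99))  # fallback choice
```

The Suggested Label lets the upstream node pre-configure its hardware before the downstream reply arrives — useful for optical cross-connects with long setup times — while the Label Set keeps the choice within labels the upstream node can actually switch.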
Routing Protocol
- Disseminates network topology and link resource availability over the control channel (CC).
- Manages the link-state database and routing tables to make routing decisions.
- Provides the path computation algorithm with the routing information needed to obtain an explicit route.
Traffic engineering (TE) and GMPLS routing extensions:
- Extend OSPF or IS-IS.
- Support multiple types of GMPLS TE links and carry new link attributes.
- Maintain a TE LSA database for explicit path computation.
Link Bundling
- Neighboring nodes (e.g. OXCs) are connected by multiple parallel links; in standard OSPF, each physical link between a pair of nodes forms a routing adjacency, which does not scale well.
- To improve routing scalability and reduce the amount of information handled by the routing protocol, the GMPLS routing protocol aggregates and abstracts the attributes of links with similar characteristics between a pair of nodes, and advertises them as a single link bundle, or Traffic Engineering (TE) link.
- Aggregation leads to information loss.
- The control channel and data links may be separated.
(Diagram: data channels 1..N as component links of one advertised link bundle.)
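The aggregation step — and the information loss it entails — can be sketched as follows (attribute names here are illustrative; real TE advertisements carry more detail, e.g. per-priority unreserved bandwidth):

```python
def bundle(components):
    """Summarize parallel component links into one advertised TE link."""
    return {
        "max_link_bw": max(c["bw"] for c in components),    # largest single LSP that fits
        "unreserved_bw": sum(c["bw"] for c in components),  # aggregate spare capacity
        "te_metric": min(c["metric"] for c in components),
        "num_components": len(components),
    }

# Three parallel links between one pair of OXCs, advertised as a single TE link.
links = [{"bw": 10.0, "metric": 5}, {"bw": 10.0, "metric": 5}, {"bw": 2.5, "metric": 5}]
print(bundle(links))
```

The loss is visible in the summary: a path computation seeing 22.5 units unreserved cannot tell that no single component can carry more than 10.0, which is why `max_link_bw` must be advertised alongside the aggregate.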
Link Management Protocol (LMP)
- Multiple fiber links may connect two adjacent nodes (e.g. OXCs, photonic switches).
- Control channels may not use the same physical medium and interfaces as the data links.
- LMP provides the capability to manage the control channel and the data links between neighboring nodes.
LMP Functionality
- Control channel management: establish and maintain LMP control channel connectivity between adjacent nodes.
- Link property correlation (link bundling management): synchronize and verify TE link (link bundle) properties; one CC per one or more link bundles.
- Link connectivity verification: data link physical connectivity discovery; mis-configuration and mis-wiring detection.
- Fault management: localize and handle data link failures.
- Service discovery: automatic discovery of services offered by the network, including signaling protocol type, link and data signal type, transparency level, etc.
LMP Operation Modes
In-fiber, in-band control channel:
- One CC per data component link, e.g. using SONET/SDH overhead bytes.
- Control channel management and data link management can be done together: neighbor discovery, mis-configuration and mis-wiring detection.
Out-of-fiber control channel (e.g. Ethernet) or in-fiber dedicated channel:
- One CC per multiple component links or multiple link bundles.
- Suits transparent devices in which the data is not modified or examined in normal operation (e.g. photonic switches); test messages are used for data link neighbor discovery and connectivity verification.
LMP In-band Control
- CID: channel ID
- config: HelloInterval and HelloDeadInterval
- hello: TxSeqNum and RcvSeqNum
Parameter negotiation (Node A ↔ Node B):
- Config (local CID, msg ID, local node ID, config)
- ConfigAck (local CID, local node ID, remote CID, msg ID ACK, remote node ID) — or ConfigNack (same fields plus an alternative config), followed by a new Config/ConfigAck exchange
Keep-alive:
- Hello (local CID, hello), exchanged periodically
LMP Out-of-Band Control
- Most LMP messages are sent out-of-band through the control channel; in-band Test messages are sent for link verification and correlation.
- The TE link (link bundle) is disseminated via the routing protocol.
- Routing flooding adjacencies are maintained over the control channel, and data forwarding adjacencies (FAs) are maintained over the component links.
Link verification message flow:
- BeginVerify / BeginVerifyAck (control channel)
- Test (data link)
- TestStatusSuccess / TestStatusAck (control channel); repeat Test for the other data component links
- EndVerify / EndVerifyAck (control channel)
Unified Control Plane
- UNI: User-to-Network Interface
- I-NNI: Internal Network-to-Network Interface
- E-NNI: External Network-to-Network Interface
(Diagram: IP and ATM client networks attach to the optical network via UNI; optical subnets interconnect internally via I-NNI and across boundaries via E-NNI.)
User-to-Network Interface (UNI)
- UNI supports establishment of connections between client nodes over an OTN (overlay model).
- Re-uses IETF GMPLS protocols:
  - signaling: RSVP-TE, CR-LDP with UNI-specific extensions
  - neighbor and service discovery: LMP with UNI-specific extensions
- Transport network assigned address (TNA): an address assigned to a client by the transport service provider; a globally unique address that can be IPv4, IPv6, or NSAP.
- UNI is used at the edge of the cloud; inside the cloud, LMP and GMPLS signaling and routing are used.
(Diagram: clients attach to the OTN via UNI, with LMP and signaling at the edges and LMP/signaling/routing inside, forming the end-to-end path.)
UNI Connection Setup Using GMPLS RSVP-TE
(Message flow: the source UNI-C sends Path via the ingress UNI-N across the network to the egress UNI-N and destination UNI-C; Resv (+ MESSAGE_ID_ACK) flows back and is acknowledged with ACK; ResvConf (+ MESSAGE_ID_ACK) confirms the reservation. Once the UNI transport connection is established, both source and destination UNI-C may start transmitting.)
Network-to-Network Interface
- Inter-domain signaling: extends the GMPLS signaling protocol, e.g. RSVP.
- Inter-domain routing:
  - extends GMPLS IGP routing protocols, e.g. multi-area OSPF, IS-IS
  - extends the inter-domain routing protocol (BGP) to exchange topology information across domain boundaries
  - abstraction and summarization of intra-domain routing information
- Neighbor discovery and link management: LMP.
(Diagram: clients connect via UNI; domains interconnect via E-NNI, with signaling and routing forming the end-to-end path.)
Path Protection and Restoration in OTN: Dedicated 1+1 Protection
- The primary and protection paths are diversified.
- In normal operation, both paths are completely provisioned and carry the optical data traffic; the egress selects the better copy of the two.
- Primary and protection paths are provisioned through GMPLS signaling protocols, e.g. RSVP.
- No switchover delay, but not efficient in terms of network resource utilization.
(Diagram: primary path A-B-C-D and a diverse protection path via E-F-G-H.)
Shared Mesh Protection and Restoration
- The shared-mesh restoration path is pre-computed and pre-provisioned: resources are reserved on its links, but no cross-connects are created along the restoration path. The restoration path is fully established only after the primary path fails.
- The common restoration resource reserved on a link may be shared by multiple restoration paths protecting multiple primary paths. To avoid contention during a single node failure, two restoration paths may share a common reserved resource only if their respective working paths are mutually node-disjoint.
- The bandwidth reserved for restoration on a link can therefore be smaller than the total bandwidth required by all the working paths it protects, and the reserved resource can carry low-priority, pre-emptible traffic in normal operation.
- Efficient, but restoration incurs a delay.
(Diagram: primary path A-B-C-D with a shared restoration channel through E-F-G-H.)
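The sharing rule above reduces to a simple disjointness check (a sketch with invented path lists; real implementations also track link disjointness and shared-risk link groups):

```python
def can_share(working_a, working_b):
    """Two restoration paths may share reserved capacity on a link only if
    their working paths have no node in common, so that no single node
    failure triggers both restorations at once."""
    return not (set(working_a) & set(working_b))

# Disjoint working paths: one reserved channel can back up both.
print(can_share(["A", "B", "C", "D"], ["E", "F", "G", "H"]))
# Both traverse B: a failure of B would claim the shared channel twice.
print(can_share(["A", "B", "C", "D"], ["E", "B", "G", "H"]))
```

This is exactly where the capacity saving comes from: within a group of mutually disjoint working paths, the restoration reservation on a shared link only needs to cover the largest single demand, not the sum.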
62 GMPLS Control Plane Prototype