MPLS Technology Overview
Outline: MPLS Overview, MPLS Framework, MPLS Applications, MPLS Architecture, Conclusion
MPLS Overview -- How it works
The IETF MPLS working group, created in 1997, was chartered to standardize a base technology that integrates the label-swapping forwarding paradigm with network layer routing. Current status: a framework document has been published as an Internet draft; it discusses technical issues and requirements for MPLS. An architecture document has also been published as an Internet draft; it contains a draft protocol architecture for MPLS, based on the MPLS framework document. An Internet draft that discusses MPLS over Frame Relay has been published. Cisco Systems Inc. is the major contributor to the MPLS working group; MPLS substitutes the term "Label" for the "Tag" of Cisco's Tag Switching.
Core mechanisms of MPLS
Semantics assigned to a stream label: labels are associated with specific streams of data. Forwarding methods: forwarding is simplified by the use of short, fixed-length labels to identify streams. Forwarding may require only simple functions such as looking up a label in a table, swapping labels, and possibly decrementing and checking a TTL. In some cases MPLS may make direct use of underlying layer 2 forwarding. Label distribution methods: allow nodes to determine which labels to use for specific streams. This may use some sort of control exchange and/or be piggybacked on a routing protocol.
Motivation for MPLS
Benefits relative to use of a router core: simplified forwarding, efficient explicit routing, traffic engineering, QoS routing, complex mappings from IP packet to forwarding equivalence class (FEC), partitioning of functionality, and a single forwarding paradigm with several levels of differentiation.
Benefits relative to use of an ATM or Frame Relay core: scaling of the routing protocol, common operation over packet and cell media, easier management, and elimination of the "routing over large clouds" issue.
MPLS Related Protocols
Data forwarding: label encapsulation; label operations PUSH, SWAP, and POP. Label distribution protocols (RFC 3036): provide procedures by which one LSR informs another of the label/FEC binding. Extensions to routing protocols: existing routing protocols can be extended to distribute traffic engineering information.
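The three label operations amount to simple manipulations of the label stack carried in front of the packet. A minimal sketch in Python (function names are illustrative, not from any real MPLS implementation):

```python
# Minimal sketch of the three MPLS label operations on a packet's label stack.
# The stack is modeled as a Python list whose last element is the top label.

def push(label_stack, label):
    """PUSH: impose a new label on top of the stack (at ingress or tunnel entry)."""
    label_stack.append(label)

def swap(label_stack, new_label):
    """SWAP: replace the top label with the one advertised by the next hop."""
    label_stack[-1] = new_label

def pop(label_stack):
    """POP: remove the top label (at the egress or penultimate hop)."""
    return label_stack.pop()

stack = []
push(stack, 40)   # ingress LSR imposes label 40
swap(stack, 45)   # transit LSR swaps 40 -> 45
pop(stack)        # egress (or penultimate) LSR removes the label
print(stack)      # [] -- a plain IP packet again
```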
MPLS Framework: the framework document discusses the core MPLS components, observations, issues, assumptions, and the technical approach. Core MPLS components: the basic routing approach, labels, and encapsulation. Observations, issues, and assumptions: layer 2 versus layer 3 forwarding, scaling issues, types of streams, and data-driven versus control-driven label assignment. Technical approach: label distribution, stream merging, loop handling, interoperation with NHRP, operation in a hierarchy, interoperation with "conventional" ATM, multicast, multipath, host interactions, explicit routing, traceroute, LSP control (egress versus local), and security.
Key Terminology in MPLS
FEC (Forwarding Equivalence Class): a group of IP packets which are forwarded in the same manner (e.g., over the same path, with the same priority and the same label). Label: a short, fixed-length identifier used to identify a FEC. Label swapping: looking up the incoming label to determine the outgoing label, encapsulation, and port. Label Switched Path (LSP): the path through one or more LSRs for a particular FEC. Label Switching Router (LSR): an MPLS-capable router.
What is a Label? The label can be carried in a layer 2 header (e.g., ATM and Frame Relay) or in a "shim" that sits between the layer 2 header and the IP header (e.g., LAN and PPP): layer 2 header | shim | IP | payload. Labels may be stacked; for example, one label can be used to steer the packet within an AS and another label can be used to steer the packet through ASes. Each 4-octet shim entry carries the Label value (20 bits), Exp: experimental bits (3 bits), S: bottom-of-label-stack flag (1 bit), and TTL: time-to-live (8 bits).
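The 4-octet shim layout can be expressed directly with bit operations. A small sketch of packing and unpacking one entry (field widths are from the slide; the function names are just for illustration):

```python
def encode_shim(label, exp, s, ttl):
    """Pack one 4-octet MPLS shim entry: 20-bit label, 3-bit Exp, 1-bit S, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def decode_shim(entry):
    """Unpack a 4-octet shim entry back into its fields."""
    return {
        "label": entry >> 12,
        "exp":   (entry >> 9) & 0x7,
        "s":     (entry >> 8) & 0x1,
        "ttl":   entry & 0xFF,
    }

entry = encode_shim(label=40, exp=0, s=1, ttl=64)
print(decode_shim(entry))  # {'label': 40, 'exp': 0, 's': 1, 'ttl': 64}
```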
Data Forwarding (figure): the ingress edge LSR PUSHes a label onto the IP packet inside the layer 2 header, transit LSRs SWAP the label, and the label is POPped at the egress edge LSR or at the penultimate LSR.
A simplified LSR forwarding engine
Figure: packets arriving on the input ports carry an MPLS label and payload; the switching table maps each incoming label to a next hop, an output port, and queuing and scheduling rules, and the packet is forwarded out the corresponding output port.
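Conceptually the engine reduces to an exact-match lookup keyed on the incoming port and label. A minimal sketch of such a switching table (a hypothetical structure, not a vendor data plane):

```python
# Switching table keyed by (input port, incoming label); each entry gives the
# output port, the outgoing label, and the forwarding operation to apply.
switching_table = {
    (2, 40): {"out_port": 3, "out_label": 45, "op": "SWAP"},
    (1, 45): {"out_port": 4, "out_label": None, "op": "POP"},
}

def forward(in_port, label, ttl):
    """Look up the incoming label, apply the operation, and decrement/check the TTL."""
    entry = switching_table.get((in_port, label))
    if entry is None or ttl <= 1:
        return None  # no binding, or TTL expired: drop (or hand to the control plane)
    return {"port": entry["out_port"], "label": entry["out_label"],
            "op": entry["op"], "ttl": ttl - 1}

print(forward(2, 40, ttl=64))  # SWAP to label 45, out port 3, TTL 63
```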
Ingress and Transit Operation
Figure: the ingress LSR matches the packet's destination against its /16 FEC entry, PUSHes label 40, and sends the packet out port 4; the transit LSR's table maps input (port 2, label 40) to output (port 3, SWAP to label 45).
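At the ingress LSR the FEC classification is an ordinary longest-prefix match on the destination address, after which the bound label is pushed. A sketch using Python's standard ipaddress module (the prefixes and labels below are invented for illustration; the slide's original prefix was not preserved):

```python
import ipaddress

# Hypothetical FEC table at the ingress LSR: destination prefix -> (output port, label to push).
fec_table = {
    ipaddress.ip_network("192.0.2.0/24"):    {"out_port": 4, "push_label": 40},
    ipaddress.ip_network("198.51.100.0/24"): {"out_port": 2, "push_label": 60},
}

def classify(dst_ip):
    """Longest-prefix match of the destination address against the FEC table."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in fec_table if dst in net]
    return fec_table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(classify("192.0.2.7"))  # -> push label 40 and forward out port 4
```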
Egress Operation The egress router has to do two table lookups
There is a concern that this might cause a performance penalty on the egress router. Solution: Penultimate Hop Popping (PHP). Figure: the egress LSR receives label 45 on port 1, POPs the label, and then looks up its /16 FEC entry to find the next hop and output port.
Routing Aggregation (figure): traffic from several access networks (Access 1, Access 2, Access 3) toward destination D traverses the MPLS domain.
Per-Hop classification, queuing, and scheduling
Figure: packets arriving on the input ports are classified, placed into queues, and scheduled onto the output ports.
PHP with Explicit NULL: the egress router returns a label value of 0 during signaling. Figure: the penultimate LSR's table maps input (port 2, label 40) to output (port 3, SWAP to label 0); the egress LSR receives label 0 and looks up its /16 FEC entry for the next hop and output port.
PHP with Implicit NULL: the egress router returns a label value of 3 during signaling, and the penultimate LSR pops the label. Figure: the penultimate LSR's table maps input (port 2, label 40) to output (port 3, POP); the egress LSR receives an unlabeled packet and looks up its /16 FEC entry for the next hop and output port.
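The two NULL conventions differ only in what the penultimate LSR installs after signaling; label values 0 (IPv4 Explicit NULL) and 3 (Implicit NULL) are reserved by RFC 3032. A small sketch of how a penultimate LSR might act on the label the egress advertised (the function itself is illustrative):

```python
IPV4_EXPLICIT_NULL = 0   # egress still receives a labeled packet (Exp bits preserved)
IMPLICIT_NULL = 3        # never appears on the wire: it tells the penultimate hop to POP

def penultimate_action(advertised_label):
    """Decide what the penultimate LSR programs for the final hop of the LSP."""
    if advertised_label == IMPLICIT_NULL:
        return {"op": "POP"}                               # PHP: egress sees a plain IP packet
    return {"op": "SWAP", "out_label": advertised_label}   # e.g. swap to Explicit NULL (0)

print(penultimate_action(3))  # {'op': 'POP'}
print(penultimate_action(0))  # {'op': 'SWAP', 'out_label': 0}
```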
Label Distribution Protocols
How do routers know what labels to use? They need a label distribution protocol. There are a number of possible label distribution methods: manual configuration, MPLS-BGP (MP-iBGP-4), the Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE, RFC 3209, which extends RSVP as defined in RFC 2205 and RFC 2210), the Label Distribution Protocol (LDP), and Constraint-Based LDP (CR-LDP).
Label Distribution Modes
Downstream-on-Demand: an LSR requests a label for a particular FEC from its next hop. Downstream Unsolicited: an LSR distributes bindings to LSRs that have not explicitly requested them (for example, topology driven). Only LDP and MPLS-BGP support Downstream Unsolicited mode.
Manual Configuration Labels are manually configured
Useful in testing or to get around signaling problems. Example LSP for a /16 prefix across R1 (ingress), R2, R3, and R4 (egress): R1 pushes label 40 toward next hop R2; R2 swaps to label 45 toward next hop R3; R3 swaps to label 50 toward next hop R4; R4 pops the label.
MPLS-BGP: use MP-iBGP-4 to distribute label information as well as VPN routes. BGP peers can send route updates and the associated labels at the same time. Route reflectors can also be used to distribute labels to increase scalability.
Forwarding Component: Label Stack and Forwarding Operations
The basic forwarding operation consists of looking up the incoming label to determine the outgoing label, encapsulation, port, and any additional information which may pertain to the stream, such as a particular queue or other QoS-related treatment. This operation is referred to as a label swap. When a packet first enters an MPLS domain, the packet is associated with a label; this is referred to as a label push. When a packet leaves an MPLS domain, the label is removed; this is referred to as a label pop. The label stack is useful within hierarchical routing domains.
Encapsulation: label-based forwarding makes use of various pieces of information, including a label or stack of labels, and possibly additional information such as a TTL field. This information can be carried in several forms. The term "MPLS encapsulation" refers to whatever form is used to carry the label information and the other information used for label-based forwarding. An encapsulation scheme may make use of the following fields: label, TTL, class of service, stack indicator, next header type indicator, and checksum.
MPLS label stack encoding
Figure: the MPLS frame delivered to the link layer consists of a sequence of 4-octet stack entries, from stack top to stack bottom, followed by the original packet. Each entry carries a Label (20 bits), Exp/CoS (3 bits), an S bottom-of-stack flag (1 bit, set only on the bottom entry), and a TTL (8 bits).
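Putting entries together, the whole stack is just a series of 4-octet words with S=1 only on the bottom entry, prepended to the original packet. A self-contained sketch of that encoding (field layout as above; the function name is illustrative):

```python
import struct

def encode_stack(labels, ttl=64):
    """Encode a list of labels (top first) as MPLS stack entries; S=1 marks the bottom entry."""
    out = b""
    for i, label in enumerate(labels):
        s = 1 if i == len(labels) - 1 else 0               # bottom-of-stack bit
        word = (label << 12) | (0 << 9) | (s << 8) | ttl   # Exp bits left at 0 here
        out += struct.pack("!I", word)                     # 4 octets, network byte order
    return out

# Two-level stack (e.g., an inter-AS label above an intra-AS label) in front of the payload.
frame = encode_stack([40, 100]) + b"original IP packet"
print(frame[:8].hex())  # '0002804000064140': label 40 (S=0), then label 100 (S=1)
```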
Label Assignment
Topology driven (e.g., Tag Switching): labels are assigned in response to normal processing of routing protocol control traffic; labels are pre-assigned, so there is no label setup latency at forwarding time.
Request driven (e.g., RSVP): labels are assigned in response to normal processing of request-based control traffic; may require a large number of labels to be assigned.
Traffic driven (e.g., Ipsilon): the arrival of data at an LSR triggers label assignment and distribution; introduces label setup latency and the potential for packet reordering.
Label Distribution Explicit Label Distribution
Downstream label allocation: label allocation is done by the downstream LSR; the most natural mechanism for unicast traffic. Upstream label allocation: label allocation is done by the upstream LSR; may be used for optimality for some multicast traffic. A unique label per egress LSR within the MPLS domain: any stream to a particular MPLS egress node could use the label of that node.
Label Distribution Explicit Label Distribution Protocol (LDP)
Reliability: provided by the transport protocol (TCP) or as part of LDP itself. Routing computation and label distribution are kept separate. Piggybacking on other control messages: use an existing routing/control protocol (OSPF, BGP, RSVP, PIM) to distribute routing/control and label information together, combining routing and label distribution. Label purge mechanisms: by timeout or by exchange of MPLS control packets.
Label Distribution Protocol
LDP peers: two LSRs that exchange label/stream mapping information via LDP. LDP messages: Discovery messages (via UDP) announce and maintain the presence of an LSR; Session messages establish and maintain sessions between LDP peers; Advertisement messages carry label operations (label distribution); Notification messages carry advisory information and signal errors. Error notifications signal fatal errors; advisory notifications report the status of the LDP session or of some previous message received from the peer.
RSVP-TE Traffic engineering extensions added to RSVP
The sender and receiver are the ingress and egress LSRs. New objects have been defined. Supports Downstream-on-Demand label distribution: PATH messages are used by the sender to solicit a label from downstream LSRs, and RESV messages are used by downstream LSRs to pass labels upstream towards the sender.
RSVP-TE Operation (figure): the ingress edge LSR sends a PATH message carrying a Label Request downstream through each LSR to the egress edge LSR; RESV messages return upstream hop by hop carrying the allocated labels (40, 45, 50 in the figure), followed by RESVCONF confirmations.
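The message flow reduces to a request travelling downstream and label bindings travelling back upstream, with each LSR recording its incoming and outgoing labels. A toy Python model of that downstream-on-demand exchange (no real RSVP messages or sockets; the structure and names are invented for illustration):

```python
# Toy model of downstream-on-demand label distribution along an ordered chain of LSRs.
# A request walks downstream; each node then returns a label upstream, and every LSR
# records incoming label -> (outgoing label, next hop).

def signal_lsp(lsrs, labels):
    """lsrs runs ingress..egress; labels[i] is the label advertised upstream by lsrs[i+1]."""
    bindings = {}
    downstream_label = None
    for i in range(len(lsrs) - 1, 0, -1):       # walk back from the egress, as RESV messages do
        advertised = labels[i - 1]              # label lsrs[i] hands to lsrs[i - 1]
        bindings[lsrs[i]] = {
            "in_label": advertised,
            "out_label": downstream_label,      # None at the egress (label is popped there)
            "next_hop": lsrs[i + 1] if i + 1 < len(lsrs) else None,
        }
        downstream_label = advertised
    bindings[lsrs[0]] = {"push_label": downstream_label, "next_hop": lsrs[1]}
    return bindings

print(signal_lsp(["R1", "R2", "R3", "R4"], labels=[40, 45, 50]))
```

With the labels from the manual-configuration example (40, 45, 50), this yields push 40 at R1, swap 40 to 45 at R2, swap 45 to 50 at R3, and pop at R4.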
RSVP-TE Operation with PHP
Figure: as above, but the egress edge LSR returns label 0 (Explicit NULL) or 3 (Implicit NULL) in its RESV message, so the penultimate LSR either swaps to label 0 or pops the label before forwarding to the egress.
LDP Supports Downstream on Demand and Downstream Unsolicited
No support for QoS or traffic engineering. UDP is used for peer discovery; TCP is used for session, advertisement, and notification messages. Uses Type-Length-Value (TLV) encoding.
CR-LDP: extensions to LDP that convey resource reservation requests for user and network constraints. CR-LDP uses TCP sessions between LSR peers to send LDP messages. It provides a mechanism for establishing explicitly routed LSPs; an explicit route is a constrained route. The ingress LSR calculates the entire route based on the Traffic Engineering Database (TED) and the known constraints.
CR-LDP Operation (figure): Label Request messages travel from the ingress edge LSR downstream through each LSR to the egress edge LSR; Label Mapping messages return upstream hop by hop carrying the allocated labels (40, 45, 50 in the figure).
CR-LDP vs RSVP-TE: the two protocols are compared on signaling attributes, LSP attributes, traffic engineering attributes, and reliability and security mechanisms.
Signaling Attributes
  Attribute             CR-LDP   RSVP-TE
  Underlying protocol   LDP      RSVP
  Transport protocol    TCP      Raw IP
  Protocol state        Hard     Soft
  Multipoint-to-point   Yes      Yes
  Multicasting          No       No
LSP Attributes
  Attribute          CR-LDP           RSVP-TE
  Explicit routing   Strict & Loose   Strict & Loose
  Route pinning      Yes              Yes
  Also compared: LSP re-routing, LSP preemption, LSP protection, LSP merging, and LSP stacking.
Traffic Engineering Attributes
Traffic control: CR-LDP operates on the forward path, RSVP-TE on the reverse path. CR-LDP negotiates resources during the Request process and confirms them during the Mapping process; LSPs are set up only if resources are available, and the ability exists to allow for negotiation of resources.
Traffic Engineering Attributes
Traffic control: CR-LDP operates on the forward path, RSVP-TE on the reverse path. RSVP-TE passes the resource requirements to the egress LER, which converts the Tspec into an Rspec; resource reservations occur during the RESV process.
Reliability & Security Attributes
Both CR-LDP and RSVP-TE are compared on link failure detection, failure recovery, and security support, and both provide mechanisms for each.
Signaling Protocols Each protocol has strengths & weaknesses
CR-LDP is based upon LDP, giving it the advantage of using a common protocol. RSVP-TE is more widely deployed than CR-LDP, giving it an early lead in the marketplace.
Modifications to Routing Protocols
Modifications are also required to distribute the traffic engineering topology, including bandwidth and administrative constraints. This information is used to build up the traffic engineering database: available bandwidth, metric, and resource class/color. OSPF-TE uses opaque LSAs to carry TE information; IS-IS-TE uses new TLVs.
Traffic Engineering Database
Figure: each link between the ingress and egress carries its available bandwidth (1M to 100M), administrative colors (Green, Red, Orange, Blue), and metric (1 to 5) in the traffic engineering database.
Selecting a Path: how to select a 2M path which excludes any blue links? First prune the links; the remaining topology (figure) contains only links with sufficient bandwidth and no blue color.
Selecting a Path: now select the shortest path over the pruned topology (figure), using the link metrics. A sketch of this prune-then-shortest-path computation follows.
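Constraint-based path selection is exactly this two-step recipe: drop links that violate the bandwidth or link-color constraints, then run an ordinary shortest-path computation on what remains. A compact Python sketch (the topology and attribute values are illustrative, not the slide's exact figure):

```python
import heapq

# Each link: (from, to, bandwidth_mbps, colors, metric). Illustrative topology only.
links = [
    ("Ingress", "A", 100, {"red", "orange"}, 1),
    ("A", "Egress", 50, {"green", "red"}, 1),
    ("Ingress", "B", 1, {"blue", "green", "red"}, 1),
    ("B", "Egress", 10, {"green", "orange"}, 5),
]

def cspf(links, src, dst, min_bw, exclude_color):
    # Step 1: prune links that cannot satisfy the constraints.
    usable = [l for l in links if l[2] >= min_bw and exclude_color not in l[3]]
    graph = {}
    for a, b, _, _, metric in usable:
        graph.setdefault(a, []).append((b, metric))
        graph.setdefault(b, []).append((a, metric))   # treat links as bidirectional
    # Step 2: Dijkstra's shortest path on the pruned graph.
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in graph.get(node, []):
            heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None  # no path satisfies the constraints

print(cspf(links, "Ingress", "Egress", min_bw=2, exclude_color="blue"))
# (2, ['Ingress', 'A', 'Egress']) in this toy topology
```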
Explicit Route: once the path has been determined, the ingress router will typically signal it using the Explicit Route Object (ERO) in RSVP-TE or the ER-TLV in CR-LDP. Figure: R1 signals an LSP to R6 with a PATH message whose explicit route lists strict hops R4 and R5.
FEC and LSP: how does an LSR associate a FEC with an LSP? RFC 3031 (Multiprotocol Label Switching Architecture) defines two possible methods. Independent control: each LSR, upon recognizing a particular FEC, makes an independent decision to bind a label to it. Ordered control: an LSR binds a label to a FEC only if it is the egress LSR for that FEC, or if it has already received a label binding for that FEC from its next hop.
Example: BGP and LSP (figure): R1 in AS 1 learns about the /16 prefixes of AS 2 from R2 via iBGP and creates LSPs (LSP1 and LSP2) toward R2.
R1 learns about the prefixes of AS2 from R2 over an iBGP session. Without MPLS, R1 forwards traffic destined for AS2 towards R2 based on the IGP topology database. With MPLS, one (or more) LSP tunnels are pre-established between R1 and R2; this is usually a manual process taking traffic engineering requirements into account. If multiple LSPs exist between the two BGP routers, route filtering can be used to direct the traffic. R1 binds the FEC (the /16 prefix) to LSP1 and uses it to forward traffic to R2.
MPLS Applications Traffic engineering Virtual Private Networks (VPNs)
Layer 3 (BGP/MPLS VPNs, RFC 2547); Layer 2, point-to-point and Virtual Private LAN (Martini, Kompella, etc.); enhanced route protection; MPLS and DiffServ.
Traffic Engineering Traditional routing selects the shortest path
All traffic between the ingress and egress nodes passes through the same links, causing congestion. Traffic engineering allows a high degree of control over the path that packets take and allows more efficient use of network resources: traffic redirection through BGP or IGP shortcuts, improved resource utilization, and load balancing.
Example: Traffic Redirection
LSPs can be established so that BGP and IGP traffic traverse different links. Figure: an LSP tunnel carries BGP traffic while IGP traffic follows the normal links; in this example, only BGP traffic is allowed to use the LSP.
Example: Improved Utilization
Engineer LSP tunnels to avoid resources that are already congested; BGP or the IGP will use the LSP tunnel instead of the normal routed path. Figure: the LSP tunnel is routed around a congested node.
Load Balancing: two LSPs are established along two paths between R1 and R5, and BGP traffic is routed along both paths.
MPLS VPN Components (figure): VPN sites attach through Customer Edge (CE) devices to Provider Edge (PE) routers, which are interconnected across Provider (P) routers. Customer Edge device: located on the customer premises. Provider Edge device: maintains VPN-related information, exchanges VPN information with other Provider Edge devices, and encapsulates/decapsulates VPN traffic. Provider router: forwards traffic and is VPN-unaware.
Layer 2 and Layer 3 MPLS VPNs
VPNs based on a layer 2 (data link layer) technology and managed at that layer are defined as layer 2 VPNs (Martini, Kompella, etc.). VPNs based on tunneling at layer 3 (the network or IP layer) are layer 3 VPNs (RFC 2547bis).
PE – CE Routing Connections
A VPN Routing and Forwarding instance (VRF) is maintained for each VPN on each PE, and each VRF contains that customer's routes. Flexible addressing: overlapping IP addresses and private IP address space are supported. Secure: customer packets are only placed in that customer's VPN. Customers can use different routing protocols toward the PE: static routes, RIP, OSPF, or BGP.
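The separation a VRF provides can be pictured as one routing table per customer, so two customers may reuse the same private prefix without conflict. A toy sketch (the customer names and prefixes are invented for illustration):

```python
import ipaddress

# One routing table per VRF; overlapping customer prefixes do not collide because
# lookups are always scoped to the VRF of the receiving (customer-facing) interface.
vrfs = {
    "customer_a": {ipaddress.ip_network("10.1.0.0/16"): "CE-A1"},
    "customer_b": {ipaddress.ip_network("10.1.0.0/16"): "CE-B1"},  # same prefix, different VPN
}

def vrf_lookup(vrf_name, dst_ip):
    """Longest-prefix match within a single customer's VRF."""
    dst = ipaddress.ip_address(dst_ip)
    table = vrfs[vrf_name]
    matches = [net for net in table if dst in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(vrf_lookup("customer_a", "10.1.2.3"))  # CE-A1
print(vrf_lookup("customer_b", "10.1.2.3"))  # CE-B1 -- same address, different customer
```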
PE – PE Routing Connections
Figure: the same CE/PE/P topology as above. MP-iBGP is used between PEs to distribute VPN routing information, and the PE routers form a full MP-iBGP mesh. The Multiprotocol Extensions of BGP propagate VPN-IPv4 routes. PE and P routers run an IGP and a label distribution protocol; P routers are VPN-unaware.
Traffic Forwarding Ingress PE router receives IP packet/Frame from CE
The ingress PE router does a lookup and adds a label stack. P routers switch the packet/frame based on the top (transport) label. The egress PE router removes the top label, uses the bottom (VPN) label to select the VPN, and then removes the bottom label and forwards the IP packet/frame to the CE.
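These steps rely on a two-entry label stack: an outer transport label that carries the packet across the P routers to the egress PE, and an inner VPN label that selects the VRF there. A small sketch of that division of labour (the labels and VRF names are illustrative):

```python
# Ingress PE pushes [vpn_label, transport_label]; the top of the stack is the last element.
def ingress_pe(packet, vpn_label, transport_label):
    return {"labels": [vpn_label, transport_label], "payload": packet}

def p_router(frame, swap_to):
    frame["labels"][-1] = swap_to          # P routers only ever touch the top label
    return frame

def egress_pe(frame, vpn_label_to_vrf):
    frame["labels"].pop()                  # remove the transport label (unless PHP already did)
    vrf = vpn_label_to_vrf[frame["labels"].pop()]   # the inner label selects the VRF
    return vrf, frame["payload"]

frame = ingress_pe("IP packet", vpn_label=200, transport_label=40)
frame = p_router(frame, swap_to=45)
print(egress_pe(frame, {200: "customer_a"}))   # ('customer_a', 'IP packet')
```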
Enhanced Route Protection
Head-end reroute: if a link along the path fails, the ingress node is notified; it must recompute another path and then set up the new path. Protection switching: pre-establish two paths for an LSP for redundancy; if a link along the primary path fails, the ingress node switches over to the secondary path. Fast reroute: each node precomputes and pre-establishes a path to bypass potential failures in the downstream link or node.
Example: Protection Switching
Figure: a link failure occurs on the primary path; when the ingress router is notified of the link failure, it switches all traffic to the secondary path.
DiffServ Scalability via Aggregation
Flows are aggregated at the edge: many flows are associated with a class (marked with a DSCP). Aggregated processing in the core: scheduling and dropping (the PHB) are based on the DSCP. DiffServ scalability comes from aggregation of traffic at the edge and processing of the aggregate only in the core.
MPLS Scalability via Aggregation
Flows are aggregated at the edge: many flows are associated with a Forwarding Equivalence Class (marked with a label). Aggregated processing in the core: forwarding is based on the label. MPLS scalability comes from aggregation of traffic at the edge and processing of the aggregate only in the core.
MPLS and DiffServ: in MPLS, flows are associated with a FEC and mapped to one label; in DiffServ, flows are associated with a class and mapped to a DSCP. MPLS switches based on the label; DiffServ schedules and drops based on the DSCP. Both share the same scalability goals: aggregation of traffic at the edge and processing of the aggregate only in the core.
E-LSP and L-LSP: DiffServ information must be made visible to the LSRs in an MPLS domain. E-LSP (EXP-Inferred-PSC LSP, up to 8 PHBs, where PHB = Per-Hop Behavior): a single LSP can support up to eight behavior aggregates (BAs); the 3-bit EXP field indicates the PHB, including drop precedence. L-LSP (Label-Only-Inferred-PSC LSP, up to 64 PHBs): a separate LSP is established for each FEC / BA (Ordered Aggregate) pair; the label conveys the scheduling class and the EXP field carries drop precedence. Both are defined for CR-LDP and RSVP-TE.
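For an E-LSP the 6-bit DSCP space has to be folded into the 3-bit EXP field at the ingress LER; how the fold is done is a per-network design choice, not fixed by the standard. One possible, purely illustrative mapping:

```python
# Hypothetical DSCP -> EXP mapping applied at the ingress LER of an E-LSP.
# The assignment of EXP code points to PHBs is configured by the operator;
# the values below are only an example, not a standardized table.
dscp_to_exp = {
    46: 5,          # EF (e.g., voice)
    34: 4, 36: 4,   # AF41 / AF42
    26: 3, 28: 3,   # AF31 / AF32
    18: 2, 20: 2,   # AF21 / AF22
    10: 1, 12: 1,   # AF11 / AF12
    0:  0,          # best effort
}

def mark_exp(dscp):
    """Return the EXP bits to write into the label entry for this DSCP."""
    return dscp_to_exp.get(dscp, 0)   # unknown code points fall back to best effort

print(mark_exp(46))  # 5
```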
MPLS Architecture: the draft MPLS architecture is based on the MPLS framework. The MPLS architecture document gives precise definitions and describes the operation of MPLS; the details of that architecture are not introduced here.
Conclusion MPLS has emerged as a promising technology that will improve the scalability of hop-by-hop routing and forwarding, and provide traffic engineering capabilities for better network provisioning. It decouples forwarding from routing and allows multiprotocol support without requiring changes to the basic forwarding paradigm.