Traffic Engineering with MPLS


1 Traffic Engineering with MPLS

2 Agenda Introduction to traffic engineering Signaling LSPs with RSVP
Brief history Vocabulary Requirements for Traffic Engineering Basic Examples Signaling LSPs with RSVP RSVP signaling protocol RSVP objects Extensions to RSVP

3 Agenda Constraint-based traffic engineering Traffic protection
Extensions to IS-IS and OSPF Traffic Engineering Database User defined constraints Path selection using CSPF algorithm Traffic protection Secondary LSPs Hot-standby LSPs Fast Reroute

4 Agenda Advanced traffic engineering features
Circuit cross connect (CCC) IGP Shortcuts Configuring for transit traffic Configuring for internal destinations

5 Introduction to Traffic Engineering

6 What problem are we trying to solve with Traffic Engineering?
Why Engineer Traffic? What problem are we trying to solve with Traffic Engineering?

7 Brief History Early 1990’s Internet core was connected with T1 and T3 links between routers Only a handful of routers and links to manage and configure Humans could do the work manually IGP (Interior Gateway Protocol) Metric-based traffic control was sufficient In the early 1990s, ISP networks were composed of routers interconnected by leased lines—T1 (1.5-Mbps) and T3 (45-Mbps) links. Traffic engineering was simpler then—metric-based control was adequate because Internet backbones were much smaller in terms of the number of routers, number of links, and amount of traffic. Also, in the days before the tremendous popularity of the any-to-any WWW, the Internet’s topological hierarchy forced traffic to flow across more deterministic paths. Current events on the network (for example, John Glenn and the Starr Report) did not create temporary hot spots. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  7

8 IGP Metric-Based Traffic Engineering
Traffic sent to A or B follows the path with the lowest metrics [Figure: a topology with two paths toward destinations A, B, and C; link metrics of 1, 1, 1, and 2 are shown.] The figure shows metric-based traffic engineering in action. When sending large amounts of data to network A, traffic is routed through the top router because it has a lower overall cost.

9 IGP Metric-Based Traffic Engineering
Drawbacks Redirecting traffic flow to A via C causes traffic for B to move also! Some links become underutilized or overutilized [Figure: the same topology with a raised metric on the original path.] Rerouting traffic for router A by raising metrics along the current path has the desired effect of forcing the traffic to use router C, but has the unintended effect of causing traffic destined for B to do the same. Since interior gateway protocol (IGP) route calculation was topology driven and based on a simple additive metric such as the hop count or an administrative value, the traffic patterns on the network were not taken into account when the IGP calculated its forwarding table. As a result, traffic was not evenly distributed across the network's links, causing inefficient use of expensive resources. Some links became congested, while other links remained underutilized. This might have been satisfactory in a sparsely-connected network, but in a richly-connected network (that is, bigger, more thickly meshed and more redundant) it is necessary to control the paths that traffic takes in order to balance loads. IGP is topology driven as opposed to being resource driven!

10 IGP Metric-Based Traffic Engineering
Drawbacks Only serves to move problem around Some links underutilized Some links overutilized Lacks granularity All traffic follows the IGP shortest path Continuously adjusting IGP metrics adds instability to the network As Internet service provider (ISP) networks became more richly connected, it became more difficult to ensure that a metric adjustment in one part of the network did not cause problems in another part of the network. Traffic engineering based on metric manipulation offers a trial-and-error approach rather than a scientific solution to an increasingly complex problem. IGP metric manipulation has a “snap” effect when it comes to redirecting traffic… (not an “even” distribution)

11 Discomfort Grows Mid-1990s
ISPs became uncomfortable with size of Internet core Large growth spurt imminent Routers too slow IGP metric engineering too complex IGP routing calculation was topology driven, not traffic driven Router based cores lacked predictability Metric-based traffic controls continued to be an adequate traffic engineering solution until 1994 or 1995. At this point, some ISPs reached a size at which they did not feel comfortable moving forward with either metric-based traffic controls or router-based cores. Traditional software-based routers had the potential to become traffic bottlenecks under heavy load because their aggregate bandwidth and packet-processing capabilities were limited. It became increasingly difficult to ensure that a metric adjustment in one part of a huge network did not create a new problem in another part. And router-based cores did not offer the high-speed interfaces or deterministic performance that ISPs required as they planned to grow their core networks.

12 Why Traffic Engineering?
There is a need for a more granular and deterministic solution “A major goal of Internet Traffic Engineering is to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and performance.” RFC 2702 Requirements for Traffic Engineering over MPLS

13 Overlay Networks are Born
ATM switches offered performance and predictable behavior ISPs created “overlay” networks that presented a virtual topology to the edge routers in their network Using ATM virtual circuits, the virtual network could be reengineered without changing the physical network Benefits Full traffic control Per-circuit statistics More balanced flow of traffic across links Asynchronous Transfer Mode (ATM) switches offered a solution when ISPs required more bandwidth to handle increasing traffic loads. The ISPs who migrated to ATM-based cores discovered that ATM permanent virtual circuits (PVCs) offered precise control over traffic flow across their networks. ISPs came to rely on the high-speed interfaces, deterministic performance, and PVC functionality that ATM switches provided. Around 1994 or 1995, Internet traffic became so high that ISPs had to migrate their networks to support trunks that were larger than T3 (45 Mbps). Fortunately, OC-3 ATM interfaces (155 Mbps) became available for switches and routers. To obtain the required speed, ISPs had to redesign their networks so that they could use higher speeds supported by a switched core. An ATM-based core fully supports traffic engineering because it can explicitly map PVCs. Mapping PVCs is done by provisioning an arbitrary virtual topology on top of the network's physical topology. PVCs are mapped from edge to edge to precisely distribute traffic across all links so that they are evenly utilized. This approach eliminates the traffic-magnet effect of least- cost routing, which results in overutilized and underutilized links. The traffic engineering capabilities supported by ATM PVCs made the ISPs more competitive within their market, so they could provide better service to their customers at a lower cost. Per-PVC statistics provided by the ATM switches facilitate monitoring traffic patterns for optimal PVC placement and management. Network designers initially provision each PVC to support specific traffic engineering objectives, and then they constantly monitor the traffic load on each PVC. If a given PVC begins to experience congestion, the ISP has the information it needs to remedy the situation by modifying either the virtual or physical topology to accommodate shifting traffic loads. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  13

14 Overlay Networks ATM core ringed by routers
PVCs overlaid onto physical network A Physical View B C In an ATM overlay network, routers surround the edge of the ATM cloud. Each router communicates with every other router by a set of PVCs that are configured across the ATM physical topology. The PVCs function as logical circuits, providing connectivity between edge routers. The routers do not have direct access to information describing the physical topology of the underlying ATM infrastructure. The routers have knowledge only of the individual PVCs that appear to them as simple point-to-point circuits between two routers. The illustration above shows how the physical topology of an ATM core differs from the logical IP overlay topology. The distinct ATM and IP networks “meet” when the ATM PVCs are mapped to router logical interfaces. Logical interfaces on a router are associated with ATM PVCs, and then the routing protocol works to associate IP prefixes (routes) with the subinterfaces. Finally, ATM PVCs are integrated into the IP network by running the IGP across each of the PVCs to establish peer relationships and exchange routing information. Between any two routers, the IGP metric for the primary PVC is set such that it is more preferred than the backup PVC. This guarantees that the backup PVC is used only when the primary PVC is not available. Also, if the primary PVC becomes available after an outage, traffic automatically returns to the primary PVC from the backup. A Logical View C B Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  14

15 Path Creation Off-line path calculation tool uses
Link utilization Historic traffic patterns Produces virtual network topology Primary and backup PVCs Generates switch and router configurations The physical paths for the overlay network are typically calculated by an off-line configuration utility on an as-needed basis—when congestion occurs, a new trunk is added or a new POP (point of presence) is deployed. The PVC paths and attributes are globally optimized by the configuration utility based on link capacity and historical traffic patterns. The configuration utility can also calculate a set of secondary PVCs that is ready to respond to failure conditions. Finally, after the globally optimized PVC mesh has been calculated, the supporting configurations are downloaded to the routers and ATM switches to implement the single or double full-mesh logical topology. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  15

16 Overlay Network Drawbacks
Growth in full mesh of ATM PVCs stresses everything With 5 routers, adding 1 requires only 10 new PVCs With 200 routers, adding 1 requires 400 new PVCs From 39,800 to 40,200 PVCs total Router IGP runs out of steam Practical limitation of atomically updating configurations in each switch and router Not well integrated Network does not participate in path selection and setup A network that deploys a full mesh of ATM PVCs exhibits the traditional “n-squared” problem. For relatively small or moderately sized networks, this is not a major issue. But for core ISPs with hundreds of attached routers, the challenge can be quite significant. For example, when expanding a network from five to six routers, an ISP must increase the number of simplex PVCs from 20 to 30. However, increasing the number of attached routers from 200 to 201 requires the addition of 400 new simplex PVCs—an increase from 39,800 to 40,200 PVCs. These numbers do not include backup PVCs or additional PVCs for networks running multiple services that require more than one PVC between any two routers. A number of operational challenges are caused by the "n-squared" problem: New PVCs must be mapped over the physical topology New PVCs must be tuned so that they have minimal impact on existing PVCs The large number of PVCs might exceed the configuration and implementation capabilities of the ATM switches The configuration of each switch and router in the core must be modified Deploying a full mesh of PVCs also stresses the IGP. This stress results from the number of peer relationships that must be maintained, the challenge of processing "n-cubed" link-state updates in the event of a failure, and the complexity of performing the Dijkstra calculation over a topology containing a significant number of logical links. Any time the topology results in a full mesh, the impact on the IGP is a suboptimal topology that is extremely difficult to maintain. As an ATM core expands, the "n-squared" stress on the IGP compounds. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  16
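
A quick back-of-the-envelope check of the full-mesh arithmetic above, where a mesh of n routers needs n x (n - 1) simplex PVCs; this is a sketch for illustration only:

    # Full mesh of n routers: n * (n - 1) simplex PVCs (one in each direction per pair).
    def simplex_pvcs(n: int) -> int:
        return n * (n - 1)

    for before, after in [(5, 6), (200, 201)]:
        added = simplex_pvcs(after) - simplex_pvcs(before)
        print(f"{before} -> {after} routers: {simplex_pvcs(before)} -> {simplex_pvcs(after)} PVCs (+{added})")
    # 5 -> 6 routers: 20 -> 30 PVCs (+10)
    # 200 -> 201 routers: 39800 -> 40200 PVCs (+400)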

17 Overlay Network Drawbacks
ATM cell overhead Approximately 20% of bandwidth OC-48 link wastes 498 Mbps in ATM cell overhead OC-192 link wastes 1.99 Gbps ATM SAR speed OC-48 SAR Trailing behind the router curve Very difficult to build OC-192 SAR? A cell tax is introduced when packet-oriented protocols such as IP are carried over an ATM infrastructure. Assuming a 20% overhead for ATM when accounting for framing and a realistic distribution of packet sizes, on a 2.5-Gbps OC-48 link, 1.99 Gbps is available for customer data and 498 Mbps—almost a full OC-12—is required for the ATM overhead. When 10-Gbps OC-192 interfaces become available, 1.99 Gbps—almost a full OC-48 of the link's capacity—will be consumed by ATM overhead. ATM router interfaces have not kept pace with the latest increases in optical bandwidth. The fastest commercially available ATM segmentation and reassembly (SAR) router interface trails behind the router curve: OC-48 packet-over-SONET (POS) router interfaces are available today, but OC-48 ATM router interfaces will not be widely available in the near future. Soon, OC-192 POS interfaces (~10 Gbps) will be available for routers, but OC-192 ATM router interfaces might never be commercially available because of the expense and complexity of implementing the SAR function at such high speeds. The limits in SAR scaling mean that ISPs attempting to increase the speed of their networks using the IP-over-ATM model will have to purchase large ATM switches and routers with a large number of slower ATM interfaces. This will dramatically increase the expense and complexity of growing the network, which will only compound as ISPs consider a future migration to OC-192.
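
A rough check of the cell-tax figures quoted above, assuming a flat 20% overhead and approximate line rates of 2.488 Gbps for OC-48 and 9.953 Gbps for OC-192 (approximations, not exact SONET payload math):

    rates_gbps = {"OC-48": 2.488, "OC-192": 9.953}    # approximate line rates
    for name, rate in rates_gbps.items():
        wasted = rate * 0.20                          # assumed 20% ATM cell tax
        print(f"{name}: ~{wasted * 1000:.0f} Mbps of overhead, ~{rate - wasted:.2f} Gbps left for customer data")
    # OC-48: ~498 Mbps of overhead, ~1.99 Gbps left for customer data
    # OC-192: ~1991 Mbps of overhead, ~7.96 Gbps left for customer data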

18 Routers Caught Up Current generation of routers have Solution
High speed, wire-rate interfaces Deterministic performance Software advances Solution Fuse best aspects of ATM PVCs with high-performance routing engines Use low-overhead circuit mechanism Automate path selection and configuration Implement quick failure recovery There are many disadvantages of continuing the IP-over-ATM model when other alternatives are available. High-speed interfaces, deterministic performance, and traffic engineering using PVCs no longer distinguish ATM switches from Internet backbone routers. Furthermore, the deployment of a router-based core solves a number of inherent problems with the ATM model— the complexity and expense of coordinating two sets of equipment, the bandwidth limitations of ATM SAR interfaces, the cell tax, the “n-squared” PVC problem, the IGP stress, and the limitation of not being able to operate over a mixed-media infrastructure. The JUNOS operating system fuses the best aspects of ATM PVCs with the high-performance Packet Forwarding Engine. Using a low-overhead circuit-switching protocol with automatic path selection and maintenance, the entire network core can be optimized by distributing traffic engineering chores to each router. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  18

19 Benefits of MPLS Low-overhead virtual circuits for IP
Originally designed to make routers faster Fixed label lookup faster than longest match used by IP routing Not true anymore! Value of MPLS is now in traffic engineering One, integrated network Same forwarding mechanism can support multiple applications Traffic Engineering, VPNs, etc…. It is commonly believed that Multiprotocol Label Switching (MPLS) significantly enhances the forwarding performance of label-switching routers. It is more accurate to say that exact-match lookups, such as those performed by MPLS and ATM switches, have historically been faster than the longest match lookups performed by IP routers. However, recent advances in silicon technology allow ASIC-based route-lookup engines to run just as fast as MPLS or ATM virtual path identifier/virtual circuit identifier (VPI/VCI) lookup engines. The real benefit of MPLS is that it provides a clean separation between routing (that is, control) and forwarding (that is, moving data). This separation allows the deployment of a single forwarding algorithm—MPLS—that can be used for multiple services and traffic types. In the future, as ISPs need to develop new revenue-generating services, the MPLS forwarding infrastructure can remain the same while new services are built by simply changing the way packets are assigned to an LSP. For example, packets could be assigned to a label-switched path based on a combination of the destination subnetwork and application type, a combination of the source and destination subnetworks, a specific quality of service (QoS) requirement, an IP multicast group, or a VPN identifier. In this manner, new services can easily be migrated to operate over the common MPLS forwarding infrastructure. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  19

20 What are the fundamental requirements?
RFC 2702 Requirements for Traffic Engineering over MPLS Requirements Control Measure Characterize Integrate routing and switching All at a lower cost

21 Fundamental Requirements
Need the ability to: Map traffic to an LSP Monitor and measure traffic Specify explicit path of an LSP Partial explicit route Full explicit route Characterize an LSP Bandwidth Priority/Preemption Affinity (Link Colors) Reroute or select an alternate LSP

22 MPLS Fundamentals

23 MPLS Header IP packet is encapsulated in MPLS header and sent down LSP
IP packet is restored at end of LSP by egress router TTL is adjusted by default IP Packet 32-bit MPLS Header MPLS is responsible for directing a flow of IP packets along a predetermined path across a network. This path is called a label-switched path. Label-switched paths are similar to ATM PVCs in that they are simplex in nature; that is, the traffic flows in one direction from the ingress router to a egress router. Duplex traffic requires two label-switched paths; that is, one path to carry traffic in each direction. A label-switched path is created by the concatenation of one or more label-switched hops, allowing a packet to be forwarded from one label-switching router to another label-switching router across the MPLS domain. A label-switching router is a router that supports MPLS-based forwarding. When an IP packet enters a label-switched path, the ingress router examines the packet and assigns it a label based on its destination, placing the label in the packet’s header. The label transforms the packet from one that is forwarded based on its IP routing information to one that is forwarded based on information associated with the label. The packet is then forwarded to the next router in the label-switched path. The key point in this scheme is that the physical path of the LSP is not limited to what the IGP would choose as the shortest path to reach the destination IP address. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  23

24 MPLS Header Label Experimental bits Stacking bit Time to live
TTL Label Used to match packet to LSP Experimental bits Carries packet queuing priority (CoS) Stacking bit Time to live Copied from IP TTL An MPLS header consists of: 20-bit label—Used to identify the packet to a particular LSP Class of service value—Indicates queuing priority through the network. At each hop along the way, the class of service value determines which packets receive preferential treatment within the tunnel. Stacking bit—Indicates that this MPLS packet has more than one label associated with it. The MPLS implementation in JUNOS Software supports a stacking depth of one. Time to live value—Contains a limit on the number of router “hops” this MPLS packet may travel through the network. It is decremented at each hop, and if the TTL value drops below one, the packet is discarded. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  24
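
The four fields above pack into a single 32-bit word; a minimal sketch of encoding and decoding that shim header (field widths per the standard MPLS encapsulation), using the label value 1965 from the forwarding example that follows:

    def pack_mpls(label: int, exp: int, bos: int, ttl: int) -> int:
        """20-bit label | 3 experimental/CoS bits | 1 bottom-of-stack bit | 8-bit TTL."""
        assert 0 <= label < 2**20 and 0 <= exp < 8 and bos in (0, 1) and 0 <= ttl < 256
        return (label << 12) | (exp << 9) | (bos << 8) | ttl

    def unpack_mpls(word: int) -> dict:
        return {"label": word >> 12, "exp": (word >> 9) & 0x7,
                "bos": (word >> 8) & 0x1, "ttl": word & 0xFF}

    hdr = pack_mpls(label=1965, exp=0, bos=1, ttl=255)
    print(hex(hdr), unpack_mpls(hdr))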

25 Router Based Traffic Engineering
Standard IGP routing IP prefixes bound to physical next hop Typically based on IGP calculation /24 /16 New York San Francisco

26 Router Based Traffic Engineering
Engineer unidirectional paths through your network without using the IGP’s shortest path calculation [Figure: the IGP shortest path from San Francisco to New York versus a JUNOS traffic engineered path.] In the traditional layer 3 forwarding paradigm, as a packet travels from one router to the next an independent forwarding decision is made at each hop. The IP network-layer header is analyzed and the next hop is chosen based on this analysis and on the information in the routing table. In the JUNOS traffic engineering environment, the analysis of the packet header is performed just once—right before the packet enters the engineered path. The packet is assigned a label, which is a short, fixed-length value placed at the front of the packet. Routers in the traffic engineering path use labels as lookup indices into the label forwarding table. For each label, this table stores forwarding information such as the router interface out of which a labeled packet should be forwarded.

27 Router Based Traffic Engineering
IP prefixes can now be bound to LSPs /24 New York San Francisco /16

28 MPLS Labels Assigned manually or by a signaling protocol in each LSR during path setup Labels change at each segment in path LSR swaps incoming label with new outgoing label Labels have “local significance” The ingress router and all subsequent routers in the label-switched path do not examine the IP routing information in the labeled packet—they use the label to look up information in their private label forwarding table. Then they replace the old label with a new label and forward the packet to the next router in the path. When the packet reaches the egress router, the egress router removes the label. The packet again becomes a native IP packet and is again forwarded based on its IP routing information. A single router can be part of multiple label-switched paths. It can be the ingress or egress router for one or more label-switched paths, and it also can be in the middle of one or more label-switched paths. The functions that each router supports depend on your network design. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  28

29 MPLS Forwarding Example
An IP packet destined to /32 arrives in SF San Francisco has route for /16 Next hop is the LSP to New York /16 IP New York San Francisco 1965 1026 Santa Fe

30 MPLS Forwarding Example
San Francisco prepends MPLS header onto IP packet and sends packet to first transit router in the path /16 New York San Francisco IP 1965 Santa Fe

31 MPLS Forwarding Example
Because the packet arrived at Santa Fe with an MPLS header, Santa Fe forwards it using the MPLS forwarding table MPLS forwarding table derived from mpls.0 switching table /16 New York San Francisco IP 1026 Santa Fe

32 MPLS Forwarding Example
Packet arrives from penultimate router with label 0 Egress router sees label 0 and strips MPLS header Egress router performs standard IP forwarding decision IP /16 New York Labels 0 through 15 are reserved labels, as specified in draft-ietf-mpls-label-encaps-07.txt. A value of 0 represents the "IPv4 Explicit NULL Label". This label value is only legal when it is the sole label stack entry. It indicates that the label stack must be popped, and the forwarding of the packet must then be based on the IPv4 header. A value of 1 represents the "Router Alert Label". This label value is legal anywhere in the label stack except at the bottom. When a received packet contains this label value at the top of the label stack, it is delivered to a local software module for processing. The actual forwarding of the packet is determined by the label beneath it in the stack. However, if the packet is forwarded further, the Router Alert Label should be pushed back onto the label stack before forwarding. The use of this label is analogous to the use of the "Router Alert Option" in IP packets. Since this label cannot occur at the bottom of the stack, it is not associated with a particular network layer protocol. A value of 2 represents the "IPv6 Explicit NULL Label".This label value is only legal when it is the sole label stack entry. It indicates that the label stack must be popped, and the forwarding of the packet must then be based on the IPv6 header. A value of 3 represents the "Implicit NULL Label". This is a label that an LSR may assign and distribute, but which never actually appears in the encapsulation. When an LSR would otherwise replace the label at the top of the stack with a new label, but the new label is "Implicit NULL", the LSR will pop the stack instead of doing the replacement. Although this value may never appear in the encapsulation, it needs to be specified in the Label Distribution Protocol, so a value is reserved. Values 4-15 are reserved for future use. IP San Francisco Santa Fe Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  32
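
A small, purely illustrative model of the push/swap/pop behavior walked through in this example; the per-router actions are hypothetical except for the label values 1965, 1026, and the reserved IPv4 Explicit NULL label 0 taken from the slides:

    # Each hop either pushes a label (ingress), swaps it (transit), or pops it (egress).
    lsp_hops = [
        ("SanFrancisco", "push", 1965),   # ingress prepends the MPLS header
        ("SantaFe",      "swap", 1026),   # transit: labels have local significance only
        ("Penultimate",  "swap", 0),      # hypothetical hop advertising IPv4 Explicit NULL
        ("NewYork",      "pop",  None),   # egress strips the header and does an IP lookup
    ]

    label = None
    for router, action, new_label in lsp_hops:
        if action == "push":
            label = new_label
            print(f"{router}: push label {label}")
        elif action == "swap":
            print(f"{router}: swap label {label} -> {new_label}")
            label = new_label
        else:
            print(f"{router}: label {label} is Explicit NULL, pop and forward by IP lookup")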

33 Example Topology [Figure: the BigNet example topology, Routers A through G connected by links with IGP metrics of 10, 20, and 30, with E-BGP peerings to Router X and Router Y in SmallNet.]

34 Example Topology [Figure: the same SmallNet/BigNet topology, now labeled with /30 subnets on each link (including 10.0.0/30, 10.0.1/30, 10.0.2/30, and 10.0.8/30) and .1/.2 host addresses at each end.]

35 Traffic Engineering Signaled LSPs

36 Static vs Signaled LSPs
Static LSPs Are ‘nailed up’ manually Have manually assigned MPLS labels Need configuration on each router Do not re-route when a link fails Signaled LSPs Signaled by RSVP Have dynamically assigned MPLS labels Configured on ingress router only Can re-route around failures

37 Signaled Label-Switched Paths
Configured at ingress router only RSVP sets up transit and egress routers automatically Path through network chosen at each hop using routing table Intermediate hops can be specified as “transit points” Strict—Must use hop, must be directly connected Loose—Must use hop, but use routing table to find it Advantages over static paths Performs “keepalive” checking Supports fail-over to unlimited secondary LSPs Excellent visibility

38 RSVP Path Signaling

39 Path Signaling JUNOS uses RSVP for Traffic Engineering
Internet standard for reserving resources Extended to support Explicit path configuration Path numbering Route recording Provides keepalive status For visibility For redundancy A label-switched path is unusable until it is actually established by the signaling component. The signaling component, which is responsible for establishing label-switched paths and label distribution, relies on the resource reservation protocol (RSVP). JUNOS uses RSVP as the signaling protocol for several reasons: RSVP was designed to be the resource reservation protocol of the Internet and “provide a general facility for creating and maintaining distributed reservation state across a set of multicast or unicast delivery paths.” Reservations are an important part of Traffic Engineering so it made sense to continue to use RSVP for this purpose rather than “reinventing the wheel.” RSVP was explicitly designed to support extensibility mechanisms by allowing it to carry opaque objects. This encourages RSVP extensions that create and maintain distributed state for information other than pure resource reservation. The designers believed that extensions could easily be developed to add support for explicit routes and label distribution. Extensions do not make the enhanced version of RSVP incompatible with existing RSVP implementations. An RSVP implementation can differentiate between LSP signaling and standard RSVP reservations by examining the contents of each message. With the proper extensions, RSVP provides a tool that consolidates the procedures for a number of critical signaling tasks into a single message exchange: Extended RSVP can establish an LSP along an explicit route. Extended RSVP can distribute label-binding information to LSRs in the LSP. Extended RSVP can reserve network resources in nodes comprising the LSP (the traditional role of RSVP). Note that extended RSVP permits an LSP to be established to carry best-effort traffic without making a specific resource reservation. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  39

40 RSVP A generic QoS signaling protocol An Internet control protocol
Uses IP as its network layer Originally designed for host-to-host Uses the IGP to determine paths RSVP is not A data transport protocol A routing protocol RFC 2205 The Resource Reservation Protocol (RSVP) is a generic signaling protocol that was originally designed to be used by applications to request and reserve specific Quality of Service (QoS) requirements across an internetwork. Resources are reserved hop-by-hop across the internetwork; each router receives the resource reservation request, establishes and maintains the necessary state for the data flow (if the requested resources are available), and forwards the resource reservation request to the next router along the path. As this behavior implies, RSVP is an internetwork control protocol, similar to ICMP, IGMP, and routing protocols. It does not transport application data, nor is it a routing protocol. RSVP utilizes unicast and multicast routing protocols to discover paths through the internetwork by consulting existing routing tables. The present document describing RSVP is RFC 2205, Resource Reservation Protocol (RSVP)--Version 1 Functional Specification.

41 Basic RSVP Path Signaling
Simplex flows Ingress router initiates connection “Soft” state Path and resources are maintained dynamically Can change during the life of the RSVP session Path message sent downstream Resv message sent upstream RSVP requests resources for simplex data flows. Each reservation is made for a data flow from a specific sender to a specific receiver. While RSVP messages are exchanged between the sender and receiver, the resulting path itself is unidirectional. Although the application data flow is from the sender to the receiver, the reservation itself is receiver-initiated. The sender notifies the receiver of a pending flow and characterizes the flow, and the receiver is responsible for requesting the resources. This design choice was made to accommodate heterogeneous receiver requirements, and for multicast flows in which multiple receivers will be joining and leaving a multicast group. RSVP requests made to routers along the transit path cause each router to either reject the request for lack of resources, or establish a soft state. This is in contrast to a hard state, which is associated with virtual connections that remain established for the duration of the data transfer. Soft state means that the logical path set up by RSVP is not necessarily associated with a physical path through the internetwork. The logical path may change during its lifetime as the result of the sender changing the characterization of the traffic, causing the receiver to modify its reservation request, or the failure of a transit router. The soft state is maintained by refreshing the soft state periodically. In standard RSVP implementations, this is done by sending PATH and RESV messages across the path. JUNOS uses a “hello” protocol to maintain state and detect node failures; by default, hello messages are sent every 3 seconds. The hello protocol can detect state changes within seconds, instead of several minutes as with standard RSVP implementations. The hello protocol is fully backward-compatible with older RSVP implementations. If a neighboring node does not support hello messages, JUNOS RSVP uses standard soft-state procedures. Sender PATH Router PATH RESV Router PATH RESV Receiver RESV Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  41

42 Other RSVP Message Types
PathTear Sent to egress router ResvTear Sent to ingress router PathErr ResvErr ResvConf Path teardown messages delete the path state from nodes that receive them. The PathTear message is originated by the sender, or by a node whose path state has timed out, and always travels downstream toward the receiver. Reservation teardown messages delete the reservation state from nodes that receive them. The ResvTear message is originated by the receiver or by a node whose reservation state has timed out, and always travels upstream toward the sender. Path error messages report errors in processing Path messages, and travel upstream to the sender along the reverse route as the Path messages. PathErr messages only report errors; they do not modify the path state of any node they pass through. Reservation error messages report errors in the processing of RESV messages, and travel hop-by-hop downstream to the receiver. Like PathErr messages, ResvErr messages only report errors and do not modify the reservation state of any node they pass through. Reservation confirmation messages are sent by each node in the path to the receiver, if the receiver requests a reservation confirmation in its RESV message. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  42

43 Extended RSVP Extensions added to support establishment and maintenance of LSPs Maintained via “hello” protocol Used now for router-to-router connectivity Includes the distribution of MPLS labels

44 MPLS Extensions to RSVP
Path and Resv message objects Explicit Route Object (ERO) Label Request Object Label Object Record Route Object Session Attribute Object Tspec Object For more detail on contents of objects: draft-ietf-mpls-rsvp-lsp-tunnel-04.txt Extensions to RSVP for LSP Tunnels RSVP has been extended for support of MPLS LSPs and Traffic Engineering. These extensions are documented in the Internet draft, Extensions to RSVP for LSP Tunnels, <draft-ietf-mpls-rsvp-lsp-tunnel-04.txt>, September 1999.

45 Explicit Route Object Used to specify the route RSVP Path messages take for setting up LSP Can specify loose or strict routes Loose routes rely on routing table to find destination Strict routes specify the directly-connected next router A route can have both loose and strict components The Explicit Route Object (ERO)is added to an RSVP Path message by the ingress LSR to specify an explicit route for the message, independent of conventional IP routing. The ERO is only to be used when all routers along the explicit route support RSVP and the ERO. The ERO is also only intended to be used for unicast situations. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  45

46 ERO: Strict Route Next hop must be directly connected to previous hop
[Figure: LSRs A through F, with A as the ingress and F as the egress; the ERO is B strict; C strict; E strict; D strict; F strict.]

47 ERO: Loose Route Consult the routing table at each hop to determine the best path [Figure: LSRs A through F, with A as the ingress and F as the egress; the ERO contains a single loose hop: D loose.]

48 ERO: Strict/Loose Path
Strict and loose routes can be mixed [Figure: LSRs A through F, with A as the ingress and F as the egress; the ERO mixes strict and loose hops: C strict; D loose; F strict.]

49 Partial Explicit Route
Partial Explicit Route “Loose” hop to Router G Follow the IGP shortest path to G first [Figure: the example topology; the LSP is given a single loose hop at Router G and follows the IGP shortest path to reach it.]

50 Full (Strict) Explicit Route
Full (Strict) Explicit Route A–F–G–E–C–D Follow the Explicit Route [Figure: the example topology; the LSP is pinned to the strict path A–F–G–E–C–D.]

51 Hop-by-Hop ERO Processing
If Destination Address of RSVP message belongs to your router You are the egress router End ERO processing Send RESV message along reverse path to ingress Otherwise, examine next object in ERO Consult routing table Determine physical next hop If ERO object is strict Verify next router is directly connected Forward to physical next hop
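
A runnable sketch of this decision procedure; the routing table and connectivity here are toy stand-ins, not real RSVP state:

    from collections import namedtuple

    Hop = namedtuple("Hop", "address strict")

    def process_path(my_address, dest, ero, rib, connected):
        """rib maps a destination to its next hop; connected is the set of direct neighbors."""
        if dest == my_address:
            return "egress reached: end ERO processing, send Resv along the reverse path"
        hop = ero[0]                                   # examine the next ERO object
        if hop.strict and hop.address not in connected:
            return f"error: strict hop {hop.address} is not directly connected"
        return f"forward the Path message toward {hop.address} via next hop {rib[hop.address]}"

    print(process_path("B", "D", [Hop("C", True)], rib={"C": "C"}, connected={"A", "C"}))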

52 Label Objects Label Request Object Label Object
Added to PATH message at ingress LSR Requests that each LSR provide label to upstream LSR Label Object Carried in RESV messages along return path upstream Provides label to upstream LSR The Label Request Object can be added to the PATH message by the ingress LSR to request that intermediate routers provide a label binding for the path. The object provides an indication of the network layer protocol that is to be carried over the path, permitting non-IP network layer protocols to be sent down the path. When a PATH message containing a Label Request Object arrives at an LSR, the LSR allocates a label for upstream propagation and stores it as part of the path state. When the corresponding RESV message arrives, the label is placed in its Label Object. The Label Object is carried in RESV messages. For FF and SE styles, a label is associated with each sender. The Label Object carries a label, and when an LSR receives a RESV message it uses the label as the outgoing label associated with the sender. The LSR allocates a new label, or uses the label allocated and stored in path state as a result of the Label Request Object, and places it in the Label Object of the RESV message to be sent to the previous hop. In this way, the Label Object supports the distribution of labels from downstream nodes to upstream nodes.

53 Record Route Object— PATH Message
Added to PATH message by ingress LSR Adds outgoing IP address of each hop in the path In downstream direction Loop detection mechanism Sends “Routing problem, loop detected” PathErr message Drops PATH message The Record Route Object can be added to Path messages to allow the sender to receive information about the actual path the LSP traverses. Each node along the path records its IP address in the RRO, and the RRO is returned to the sender in Resv messages.

54 Record Route Object — RESV Message
Added to RESV message by egress LSR Adds outgoing IP address of each hop in the path In upstream direction Loop detection mechanism Sends “Routing problem, loop detected” ResvErr message Drops RESV message

55 Session Attribute Object
Added to PATH message by ingress router Controls LSP Priority Preemption Fast-reroute Identifies session ASCII character string for LSP name The Session Attribute Object can be added to the Path message to control LSP priority, preemption, and fast-reroute.

56 Tspec Object Contains link management configuration
Requested bandwidth Minimum and maximum LSP packet size Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  56

57 Path Signaling Example
Signaling protocol sets up path from San Francisco to New York, reserving bandwidth along the way Seattle New York (Egress) PATH San Francisco (Ingress) PATH PATH Miami

58 Path Signaling Example
Once path is established, signaling protocol assigns label numbers in reverse order from New York to San Francisco Seattle New York (Egress) RESV 3 San Francisco (Ingress) 1965 RESV 1026 RESV Miami
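
A toy walk-through of this two-message exchange: each router allocates a label, hands it upstream in the Resv Label Object, and the upstream router installs a swap entry. The transit router names and downstream order are assumed; the label values 1965, 1026, and 3 are the ones shown on the slide:

    downstream = ["SanFrancisco", "Transit1", "Transit2", "NewYork"]      # assumed hop order
    advertised = {"Transit1": 1965, "Transit2": 1026, "NewYork": 3}       # label sent upstream in the Resv

    for upstream_node, node in zip(downstream, downstream[1:]):
        out_label = advertised[node]                 # label learned from the downstream neighbor
        in_label = advertised.get(upstream_node)     # None at the ingress, which pushes instead
        op = f"push {out_label}" if in_label is None else f"swap {in_label} -> {out_label}"
        print(f"{upstream_node}: {op}, next hop {node}")
    # SanFrancisco: push 1965, next hop Transit1
    # Transit1: swap 1965 -> 1026, next hop Transit2
    # Transit2: swap 1026 -> 3, next hop NewYork
    # (label 3 is the reserved Implicit NULL, so in practice the penultimate hop pops rather than swaps)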

59 Adjacency Maintenance—Hello Message
New RSVP extension Hello message Hello Request Hello Acknowledge Rapid node to node failure detection Asynchronous updates 3 second default update timer 12 second default dead timer

60 Path Maintenance— Refresh Messages
Maintains reservation of each LSP Sent every 30 seconds by default Consists of PATH and RESV messages Node to node, not end to end

61 RSVP Message Aggregation
Bundles up to 30 RSVP messages within single PDU Controls Flooding of PathTear or PathErr messages Periodic refresh messages (PATH and RESV) Enhances protocol efficiency and reliability Disabled by default

62 Traffic Engineering Constrained Routing

63 Signaled vs Constrained LSPs
Common Features Signaled by RSVP MPLS labels automatically assigned Configured on ingress router only Signaled LSPs CSPF not used User configured ERO handed to RSVP for signaling RSVP consults routing table to make next hop decision Constrained LSPs CSPF used Full path computed by CSPF at ingress router Complete ERO handed to RSVP for signaling

64 Constrained Shortest Path First Algorithm
Modified “shortest path first” algorithm Finds shortest path based on IGP metric while satisfying additional constraints Integrates TED (Traffic Engineering Database) IGP topology information Available bandwidth Link color Modified by administrative constraints Maximum hop count Bandwidth Strict or loose routing Administrative groups The ingress router determines the physical path for each LSP by applying a Constrained Shortest Path First (CSPF) algorithm to the information in the TED. CSPF is a shortest-path-first algorithm that has been modified to take into account specific restrictions when calculating the shortest path across the network. Input into the CSPF algorithm includes: Topology link-state information learned from the IGP and maintained in the TED Attributes associated with the state of network resources (such as total link bandwidth, reserved link bandwidth, available link bandwidth, and link color) that are carried by IGP extensions and stored in the TED Administrative attributes required to support traffic traversing the proposed LSP (such as bandwidth requirements, maximum hop count, and administrative policy requirements) that are obtained from user configuration As CSPF considers each candidate node and link for a new LSP, it either accepts or rejects a specific path component based on resource availability or whether selecting the component violates user policy constraints. The output of the CSPF calculation is an explicit route consisting of a sequence of router addresses that provides the shortest path through the network that meets the constraints. This explicit route is then passed to the signaling component, which establishes forwarding state in the routers along the LSP. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  64

65 Computing the ERO Ingress LSR passes user defined restrictions to CSPF
Strict and loose hops Bandwidth constraints Admin Groups CSPF algorithm Factors in user defined restrictions Runs computation against the TED Determines the shortest path CSPF hands full ERO to RSVP for signaling
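
A minimal CSPF sketch under simplifying assumptions: prune TED links that fail the LSP's constraints (available bandwidth, excluded administrative groups), then run an ordinary shortest-path-first over what remains and return the hop list that would become the strict ERO. The toy TED loosely echoes the example topology's metrics and is illustrative only:

    import heapq

    def cspf(ted, src, dst, bandwidth, exclude_groups=frozenset()):
        # ted: {node: [(neighbor, igp_metric, available_bw_mbps, admin_groups), ...]}
        dist, prev, heap = {src: 0}, {}, [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, metric, avail_bw, groups in ted.get(node, []):
                if avail_bw < bandwidth or (groups & exclude_groups):
                    continue                              # constraint check prunes this link
                nd = d + metric
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        if dst not in dist:
            return None                                   # no path satisfies the constraints
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))                       # handed to RSVP as a strict ERO

    ted = {
        "A": [("B", 10, 100, set()), ("F", 20, 155, set())],
        "B": [("C", 10, 45, {"bronze"})],
        "C": [("D", 10, 100, set())],
        "F": [("G", 20, 155, set())],
        "G": [("E", 20, 155, set())],
        "E": [("C", 30, 100, set()), ("D", 30, 100, set())],
    }
    print(cspf(ted, "A", "D", bandwidth=40))                              # ['A', 'B', 'C', 'D']
    print(cspf(ted, "A", "D", bandwidth=70))                              # B-C lacks bandwidth, longer path
    print(cspf(ted, "A", "D", bandwidth=40, exclude_groups={"bronze"}))   # avoids the bronze B-C link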

66 Traffic Engineering Database

67 Traffic Engineering Database
CSPF uses TED to calculate explicit paths across the physical topology Similar to IGP link-state database Relies on extensions to IGP Network link attributes Topology information Separate from IGP database

68 TE Extensions to ISIS/OSPF
Describes traffic engineering topology Traffic engineering database (TED) Bandwidth Administrative groups Does not necessarily match regular routed topology Subset of IGP domain ISIS Extensions IP reachability TLV IS reachability TLV OSPF Extension Type 10 Opaque LSA

69 ISIS TE Extensions IP Reachability TLV IP prefixes that are reachable
IP link default metric Extended to 32 bits (wide metrics) Up/down bit Avoids loops in L1/L2 route leaking The extended IP reachability TLV is TLV type 135. The existing IP reachability TLV is a single TLV that carries IP prefixes in a format that is analogous to the IS neighbor TLV. It carries four metrics, of which only the default metric is commonly used. Of this, the default metric has a possible range of 0-63. This limitation is one of the restrictions that is now lifted. In addition, route redistribution (a.k.a. route leaking) is a key problem that is not addressed by the existing IP reachability TLV. This problem occurs when an IP prefix is injected into a level one area, redistributed into level 2, subsequently redistributed into a second level one area, and then redistributed from the second level one area back into level two. This problem occurs because the path that the information can take forms a loop. The likely result is a forwarding loop. To address these issues, the proposed extended IP reachability TLV provides for a 32 bit metric and adds one bit to indicate that a prefix has been redistributed 'down' in the hierarchy.

70 ISIS TE Extensions IS Reachability TLV IS neighbors that are reachable
ID of adjacent router IP addresses of interface (/32 prefix length) Sub-TLVs describe the TE topology The extended IS reachability TLV is TLV type 22. The existing IS reachability TLV is a single TLV that contains information about a series of IS neighbors. For each neighbor, there is a structure that contains the default metric, the delay, the monetary cost, the reliability, and the 7-octet ID of the adjacent neighbor. Of this information, the default metric is commonly used. The remaining three metrics (delay, monetary cost, and reliability)are not commonly implemented and reflect unused overhead in the TLV. The neighbor is identified by its system ID (typically 6-octets), plus one octet to indicate the pseudonode number if the neighbor is on a LAN interface. Thus, the existing TLV consumes 11 octets per neighbor, with 4 octets for metric and 7 octets for neighbor identification. To indicate multiple adjacencies, this structure is repeated within the IS reachability TLV. Because the TLV is limited to 255 octets of content, a single TLV can describe up to 23 neighbors. The IS reachability TLV can be repeated within the LSP fragments to describe further neighbors. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  70

71 ISIS IS Reachability TLV
Sub-TLVs contain Local interface IP address Remote interface IP address Maximum link bandwidth Maximum reservable link bandwidth Reservable link bandwidth Traffic engineering metric Administrative group Reserved TLVs for future expansion Please refer to Internet Draft, IS-IS extensions for Traffic Engineering, <draft-ietf-isis-traffic-01.txt>, May 1999 for explanations on each sub-TLV.
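
To make the sub-TLV idea concrete, here is a sketch of packing two of the values listed above into type/length/value form. The type codes and units used here (3 = administrative group bit mask, 9 = maximum link bandwidth as a 32-bit IEEE float in bytes per second) are assumptions drawn from the IS-IS TE extensions; verify them against the draft before relying on them:

    import struct

    def sub_tlv(tlv_type: int, value: bytes) -> bytes:
        """One sub-TLV: 1-byte type, 1-byte length, then the value itself."""
        return struct.pack("!BB", tlv_type, len(value)) + value

    admin_group = sub_tlv(3, struct.pack("!I", 1 << 2))               # e.g. bit 2 = "bronze" (illustrative)
    max_link_bw = sub_tlv(9, struct.pack("!f", 155_520_000 / 8))      # OC-3 link, in bytes per second

    print(admin_group.hex(), max_link_bw.hex())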

72 OSPF TE Extensions Opaque LSA Traffic Engineering LSA
Original Router LSA not extensible Type 10 LSA Area flooding scope Standard LSA header (20 bytes) TE capabilities Traffic Engineering LSA Work in progress This extension makes use of the Opaque LSA. Three types of Opaque LSAs exist, each of which has different flooding scope. This proposal uses only Type 10 LSAs, which have area flooding scope. One new LSA is defined, the Traffic Engineering Opaque LSA. This LSA describes routers, point-to-point links, and connections to multiaccess networks (similar to a Router LSA). For traffic engineering purposes, the existing Network LSA suffices for describing multiaccess links, so no additional LSA is defined for this purpose. Please refer to Internet Draft, Traffic Engineering Extensions to OSPF, <draft-katz-yeung-ospf-traffic-01.txt>, April 1999 for more information on the OSPF extensions. “work in progress”, meaning ISIS MPLS-TE development is ahead of OSPF Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  72

73 Configuring Constraints— LSP 1 with 40 Mbps
Configuring Constraints—LSP 1 with 40 Mbps: the LSP follows the IGP shortest path to Router D since sufficient bandwidth is available. [Network diagram: Routers A–G interconnected by 10.0.x/30 links, with Routers X and Y in SmallNet beyond Router D; LSP1 (40 Mbps) runs from Router A along the shortest path to Router D.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  73

74 Configuring Constraints— LSP 2 with 70 Mbps
Configuring Constraints—LSP 2 with 70 Mbps: insufficient bandwidth is available on the IGP shortest path, so CSPF selects another path. [Network diagram: LSP1 (40 Mbps) remains on the shortest path; LSP2 (70 Mbps) from Router A to Router D is signaled over an alternate path with enough unreserved bandwidth.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  74

75 Affinity (Link Colors)
Ability to assign a color to each link Gold Silver Bronze Up to 32 colors available Can define an affinity relationship Include Exclude JUNOS divides traffic engineering into its component parts: Packet forwarding Information distribution Path selection Path signaling Each component is an individual module. The JUNOS traffic engineering architecture provides clean interfaces between each of the four functional components. This combination of modularity and clean interfaces provides flexibility, allowing you to change individual components as needed. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  75
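A hedged configuration sketch of link coloring and affinity (the interface name, LSP name, and egress address are illustrative placeholders; the admin-groups, admin-group, and include/exclude statements follow the JUNOS syntax of this era):

[edit protocols mpls]
admin-groups {
    gold 1;
    silver 2;
    bronze 3;
}
interface so-1/0/0.0 {
    admin-group bronze;        # color this link bronze
}
label-switched-path lsp-to-d {
    to 192.0.2.1;              # placeholder egress address
    admin-group {
        exclude bronze;        # never place this LSP on bronze links
    }
}

An include list works the same way, restricting CSPF to links that carry at least one of the listed colors.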

76 Configuring Constraints— LSP 3 with 50 Mbps
Configuring Constraints—LSP 3: exclude all Bronze links. [Network diagram: several links are colored Bronze; LSP3 (labeled 20 Mbps, exclude Bronze) from Router A to Router D avoids the Bronze links, while LSP1 (40 Mbps) and LSP2 (70 Mbps) remain in place.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  76

77 Preemption Defines relative importance of LSPs on same ingress router
CSPF uses priority to optimize paths Higher priority LSPs Are established first Offer more optimal path selection May tear down lower priority LSPs when rerouting Default configuration makes all LSPs equal Attempts to establish an LSP might fail because of a lack of bandwidth. For some LSPs, it might be the case that the network should tear down other low-priority LSPs in favor of the one being established. The new LSP is said to preempt existing LSPs. Preemption does not occur if the new LSP’s priority is equal to or weaker than that of the existing LSPs, or if preempting existing LSPs still would not produce sufficient bandwidth to support the new LSP. In other words, preemption happens only if it permits the successful setup of a new, higher priority LSP. Setup priority also serves to define the relative importance of LSPs on the same ingress router. During software startup, new connection establishment, or fault recovery, setup priority is used to determine the order in which LSPs are serviced. High-priority LSPs tend to be established first and offer more optimal path selection. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  77

78 Preemption Controlled by two settings Use with caution
Setup priority and hold (reservation) priority New LSP compares its setup priority with hold priority of existing LSP If the setup priority is numerically lower (stronger) than the hold priority, the existing LSP is rerouted to make room Priorities from 0 (strong) through 7 (weak) Defaults Setup priority is 7 (do not preempt) Reservation priority is 0 (do not allow preemption) Use with caution No large scale experience with this feature You control preemption with the priority statement. This statement has two options, the setup priority (setup-priority) and the reservation priority (reservation-priority). Setup priority—used when the LSP is being set up to determine if other LSPs should be preempted by the new LSP. Reservation priority—used for keeping the reservation after successful setup. A high setup priority means the LSP tends to be preemptive. A high reservation priority means the LSP tends not to be preemptable. An LSP is not allowed to have a strong setup priority and a weak reservation priority. Permanent preemption loops might result if two LSPs are allowed to preempt each other. You must configure the reservation priority to be “stronger” than or equal to the setup priority. There are eight levels of priority, ranging from 0 (the highest) through 7 (the lowest). The number of priority levels is an architectural constant and is not configurable. The default behavior is for an LSP to have the lowest setup priority, 7 (that is, it is non-preemptive), and to have the highest reservation priority, 0 (that is, it is non-preemptable), so that preemption does not happen. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  78
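A minimal sketch of the priority statement (the LSP name is hypothetical, and the values shown are examples rather than recommendations):

[edit protocols mpls label-switched-path important-lsp]
priority 3 3;    # setup priority 3, hold (reservation) priority 3; 0 is strongest, 7 is weakest

The default, priority 7 0, means the LSP neither preempts other LSPs nor can itself be preempted.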

79 LSP Reoptimization Reroutes LSPs that would benefit from improvements in the network Special rules apply Disabled by default in JUNOS Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  79

80 LSP Reoptimization Rules
Reoptimize if new path can be found that meets all of the following Has lower IGP metric Has fewer hops Does not cause preemption Reduces congestion by 10% Compares aggregate available bandwidth of new and old path Intentionally conservative rules, use with care Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  80
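Reoptimization is enabled per LSP with a periodic timer; a hedged sketch (the LSP name and interval are illustrative, using the optimize-timer statement as documented for JUNOS of this era):

[edit protocols mpls label-switched-path lsp-to-d]
optimize-timer 3600;    # re-run CSPF every 3600 seconds; when unset, reoptimization stays disabled

When the timer fires, the rules listed above decide whether the LSP actually moves to the newly computed path.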

81 LSP Load Balancing Two categories Selecting path for each LSP
Multiple equal cost IP paths to egress are available Random Least-fill Most-fill Balance traffic over multiple LSPs Multiple equal cost LSPs to egress are available BGP can load balance prefixes over 8 LSPs Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  81

82 LSP Load Balancing Selecting path for each LSP Random is default
Distributes LSPs randomly over available equal cost paths. Least-fill: distributes LSPs over available equal cost paths based on available link bandwidth. Most-fill: LSPs fill one link first, then the next. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  82
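The selection rule is set per LSP; a minimal sketch (the LSP name is hypothetical; least-fill, most-fill, and random are the documented keywords, with random the default):

[edit protocols mpls label-switched-path lsp-to-d]
least-fill;    # prefer the equal-cost path whose links have the most unreserved bandwidth

Most-fill instead packs LSPs onto the most heavily used qualifying path first; omitting the statement leaves the default random placement.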

83 Selecting paths for each LSP
Selecting paths for each LSP: most-fill, least-fill, or random. Configure 12 LSPs, each with 10 Mbps. [Network diagram: Routers A–G with parallel equal-cost paths toward SmallNet (Routers X and Y); links are annotated with 20 and 30 (Mbps), and the 12 LSPs are distributed across the paths according to the selected rule.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  83

84 Load Balancing Balancing traffic over multiple LSPs
Up to 16 equal cost paths for BGP JUNOS default is per-prefix Per-packet (per-flow) knob available Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  84
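Per-packet (per-flow) balancing over the equal-cost LSPs is enabled through a forwarding-table export policy. A minimal sketch (the policy name lb is hypothetical):

[edit policy-options]
policy-statement lb {
    then {
        load-balance per-packet;
    }
}
[edit routing-options]
forwarding-table {
    export lb;
}

Without this policy the router keeps the default per-prefix behavior described above.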

85 Balancing traffic over equal cost IGP paths
Balancing traffic over equal cost IGP paths: without LSPs configured, prefixes are distributed over the equal cost IGP paths. [Network diagram: Routers A–G with multiple equal-cost paths toward Router D and SmallNet (Routers X and Y); links annotated with 20 and 30.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  85

86 Balancing traffic over equal cost LSPs
Balancing traffic over equal cost LSPs: same behavior, now over LSPs; prefixes are distributed over multiple LSPs. [Network diagram: the same topology, with parallel equal-cost LSPs from the ingress toward the egress router carrying the distributed prefixes.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  86

87 Advanced Traffic Engineering Features
MPLS Welcome  87

88 Traffic Protection MPLS Welcome  88

89 Traffic Protection Primary LSP Secondary LSPs Fast Reroute
Retry timer Retry limit Secondary LSPs Standby option Fast Reroute Adaptive mode Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  89

90 Primary LSP Optional If no primary configured Zero or one primary path
If configured, becomes preferred path for LSP If no primary configured LSR makes all decisions to reach egress Zero or one primary path Revertive capability Revertive behavior can be modified Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  90

91 Primary LSP Revertive Capability Retry timer Retry limit
Time between attempts to bring up a failed primary path. Default is 30 seconds. The primary must be stable for two times (2x) the retry timer before traffic reverts back to it. Retry limit: number of attempts to bring up a failed primary path. Default is 0 (unlimited retries). If the limit is reached, human intervention is then required. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  91
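Both knobs are per-LSP statements; a hedged sketch (the LSP name and values are illustrative):

[edit protocols mpls label-switched-path lsp-to-d]
retry-timer 60;    # seconds between attempts to re-signal the failed primary (default 30)
retry-limit 10;    # stop after 10 attempts; the default of 0 retries indefinitely

If the retry limit is reached, the primary stays down until an operator clears or reconfigures the LSP.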

92 Secondary LSP Optional Zero or more secondary paths
All secondary paths are equal Selection based on listed order of configuration Standby knob Maintains secondary path in ‘up’ condition Eliminates call-setup delay of secondary LSP Additional state information must be maintained Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  92
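A minimal sketch of a secondary path kept in hot standby (the LSP and path names are hypothetical):

[edit protocols mpls label-switched-path lsp-to-d]
primary via-gold;
secondary via-silver {
    standby;
}

With standby, the secondary is signaled immediately rather than only after a failure, trading the extra state for faster switchover.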

93 Secondary Paths— LSP 1, exclude Bronze
Secondary Paths—LSP 1, exclude Bronze: the secondary should avoid the primary path if possible. [Network diagram: Gold and Bronze links between Routers A–G; primary LSP1 (20 Mbps, exclude Bronze) follows Gold links from Router A to Router D, while the secondary (0 Mbps reservation) is drawn over a different route.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  93

94 Adaptive Mode Applies to Avoids double counting SE Reservation style
LSP rerouting Primary & secondary sharing links Avoids double counting SE Reservation style When an LSP is adaptive, the LSP holds onto existing resources until the new path is successfully established and traffic has been cut over to the new LSP. To retain its resources, an adaptive LSP does the following: Maintain existing paths and allocated bandwidths--This ensures that the existing path is not torn down prematurely and allows the current traffic to continue flowing while the new path is being set up. Avoid double counting for links that share the new and old paths--Double-counting occurs when an intermediate router does not recognize that the new and old paths belong to the same LSP and counts them as two separate LSPs, requiring separate bandwidth allocations. If some links are close to saturation, double-counting might cause the setup of the new path to fail. [ edit protocols mpls label-switched-path lsp-path-name ] adaptive; Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  94

95 Shared Links FF reservation style: SE Reservation style: Shared link
[Diagram: two RSVP sessions, each from an ingress LSR to an egress LSR, whose paths share a common link.] The Fixed Filter style is commonly used for applications where traffic from each sender is likely to be concurrent and independent. Each of the individual senders is identified by an IP address and an internal identification number—a TCP or UDP port number or, in the case of MPLS, an LSP ID. When used with MPLS, the FF style allows the establishment of multiple, parallel, unicast, point-to-point LSPs to support load balancing. If the LSPs share a common link, the total amount of reserved bandwidth for the shared link is the sum of the reservations for the individual senders. Sessions using the Shared Explicit style share the bandwidth of the largest request across a shared link. When used with MPLS, the SE style is critical for supporting the ability to reroute an established LSP. This is because on shared links, if reservations are counted twice, the router’s Admission Control function could reject the new LSP due to a lack of resources. The SE reservation style permits the old and new LSPs to share resources over shared links. By default, JUNOS software uses the FF style. The SE style can be configured with the adaptive keyword under the LSP configuration. FF reservation style: each session has its own identity, and each session has its own bandwidth reservation. SE reservation style: each session has its own identity, but sessions share a single bandwidth reservation. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  95

96 Secondary Paths— LSP 1, exclude Bronze
Secondary Paths—LSP 1, exclude Bronze: the secondary is in standby mode, 20 Mbps, exclude Gold. [Network diagram: primary LSP1 (20 Mbps, exclude Bronze) over Gold links and a standby secondary (20 Mbps, exclude Gold) over a disjoint route from Router A to Router D.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  96

97 Fast Reroute Configured on ingress router only
Detours around node or link failure ~100s of ms reroute time Detour paths immediately available Crank-back to node, not ingress router Uses TED to calculate detour Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  97

98 Fast Reroute Short term solution to reduce packet loss
If node or link fails, upstream node Immediately detours Signals failure to ingress LSR Only ingress LSR knows policy constraints Ingress computes alternate route Based on configured secondary paths Initiates long term reroute solution Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  98

99 Fast Reroute Example Primary LSP from A to E [Diagram: routers A, B, C, D, E, and F; the primary LSP runs A–B–C–D–E, with router F available for the secondary path.]
[edit protocols]
show mpls
mpls {
    label-switched-path to-E {
        to ;                   # egress address missing in the source
        primary use-C;
        secondary use-F;
    }
    path use-C {
        loose;                 # loose-hop address missing in the source
    }
    path use-F {
        loose;                 # loose-hop address missing in the source
    }
}
Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  99

100 Fast Reroute Example Enable fast reroute on ingress
A creates detour around B. B creates detour around C. C creates detour around D.
[edit protocols]
show mpls
mpls {
    label-switched-path to-E {
        to ;                   # egress address missing in the source
        primary use-C;
        secondary use-F;
        fast-reroute;
    }
    path use-C {
        loose;                 # loose-hop address missing in the source
    }
    path use-F {
        loose;                 # loose-hop address missing in the source
    }
}
Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  100

101 Fast Reroute Example - Short Term Solution
B to C link fails. B immediately detours around C. B signals to A that the failure occurred. [Diagram: routers A–F with B's detour path highlighted.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  101

102 Fast Reroute Example – Long Term Solution
A calculates and signals a new primary path. [Diagram: routers A–F with the recomputed end-to-end path from A to E highlighted.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  102

103 LSP Rerouting Initiated by ingress LSR Conditions that trigger reroute
Exception is fast reroute Conditions that trigger reroute More optimal route becomes available Failure of a resource along the LSP path Preemption occurs Manual configuration change Make before break (if adaptive) Establish new LSP with SE style Transfer traffic to new LSP Tear down old LSP Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  103

104 Advanced Route Resolution
MPLS Welcome  104

105 Mapping Transit Traffic
Mapping transit destinations JUNOS default mode Only BGP prefixes are bound to LSPs Only BGP can use LSPs for its recursive route calculations Only BGP prefixes that have the LSP destination address as the BGP next-hop are resolvable through the LSP Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  105

106 Route Resolution– Transit Traffic Example
Route Resolution – Transit Traffic Example: I-BGP runs across the core, E-BGP to SmallNet, and a “next hop self” policy is configured on Router D. [Network diagram: Routers A–G in the core; Routers X and Y in SmallNet advertise /16 prefixes over E-BGP toward Router D.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  106

107 What if BGP next hop does not align with LSP endpoint?
What if the BGP next hop does not align with the LSP endpoint? [Network diagram: the E-BGP session terminates beyond the LSP egress and the external link is an IGP passive interface, so the BGP next hop is not the LSP destination address and traffic from Router A cannot resolve through the LSP.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  107

108 Traffic Engineering Shortcuts
Configure TE shortcuts on the ingress router. Useful for BGP next hops that are not resolvable directly through an LSP, provided an LSP exists that gets you closer to the BGP next hop. Installs prefixes that are downstream from the egress router into the ingress router’s inet.3 route table. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  108
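In JUNOS this is an IGP-side setting; a minimal sketch for IS-IS (a comparable statement exists under OSPF as well):

[edit protocols isis]
traffic-engineering {
    shortcuts;
}

With shortcuts enabled, the SPF calculation treats LSPs as next hops where they shorten the path, and the prefixes reachable through the LSP egress are installed in inet.3.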

109 BGP next hops beyond the egress router can use the LSP!
BGP next hops beyond the egress router can use the LSP! [Network diagram: the BGP next hop learned over E-BGP is downstream from the LSP endpoint; with TE shortcuts, traffic from Router A still resolves through the LSP toward Router D.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  109

110 TE Shortcuts By itself, still only usable by BGP
Installs additional prefixes in ingress router’s inet.3 table Only BGP can use routes in inet.3 for BGP recursive lookups Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  110

111 But, cannot use the LSP for traffic destined to web servers
But the LSP cannot be used for traffic destined to the web servers. [Network diagram: a webserver farm on a /24 that is part of the IGP domain; transit (BGP) traffic uses the LSP, while web traffic to these IGP destinations is forwarded over the hop-by-hop IGP path.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  111

112 BGP-IGP knob Traffic-engineering bgp-igp knob
Forces all MPLS prefixes into main routing table (inet.0) All destinations can now use all LSPs IGP and BGP prefixes Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  112
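A minimal sketch of the knob named on the slide, configured globally for MPLS:

[edit protocols mpls]
traffic-engineering bgp-igp;

This moves the LSP routes from inet.3 into inet.0, so IGP destinations such as the webserver farm in the next example can also resolve through the LSPs.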

113 Now all traffic destined to the egress router and beyond uses the LSP
Now all traffic destined to the egress router and beyond uses the LSP. [Network diagram: with traffic-engineering bgp-igp, both transit traffic and traffic to the webserver farm (a /24 that is part of the IGP domain) are forwarded over the LSP from Router A.] Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  113

114 TTL Decrement Default is to decrement TTL on all LSR hops
Loop prevention Topology discovery via traceroute Disable TTL decrement inside LSP No topology discovery TTL decrement at egress router only [edit protocols mpls label-switched-path lsp-path-name] set no-decrement-ttl By default, the TTL field value in the packet header is decremented by 1 for every hop the packet traverses in the LSP, thereby preventing loops. If the TTL field value reaches 0, packets are dropped and an ICMP error packet might be sent to the originating router. You might want to disable normal TTL decrementing in an LSP so that the TTL field value would not reach 0 before the packet reaches its destination, keeping the packet from being dropped. You might also want to disable normal TTL decrementing to make the MPLS cloud appear as a single hop, thereby hiding the network topology. Once set, the TTL field of IP packets entering LSPs is decremented by only 1 upon exiting the LSP, making the LSP appear as a one-hop router to diagnostic tools, such as traceroute. The ingress router pushes a label on IP packets to create MPLS packets, and the TTL field in the label is initialized to 255. The label's TTL field value is decremented by 1 for every hop the MPLS packet traverses in the LSP. On the penultimate hop of the LSP, the router pops the label, but does not write the label's TTL field value to the IP packet's TTL field. Instead, when the IP packet reaches the egress router, the IP packet's TTL field value is decremented by 1. When you use traceroute to diagnose problems with an LSP, traceroute sees the ingress router, although the egress router performs the TTL decrement. This feature works only if all the routers in your network are Juniper Networks routers. LSPs with no-decrement-ttl configured will not come up if another vendor's router is detected in the network. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  114

115 Circuit Cross Connect MPLS Welcome  115

116 Circuit Cross-Connect (CCC)
Transparent connection between two Layer 2 circuits Supports PPP, Cisco HDLC, Frame Relay, ATM, MPLS Router looks only as far as Layer 2 circuit ID Any protocol can be carried in packet payload Only “like” interfaces can be connected (for example, Frame Relay to Frame Relay, or ATM to ATM) Three types of cross-connects Layer 2 switching MPLS tunneling Stitching MPLS LSPs Circuit cross-connect (CCC) allows you to configure transparent connections between two circuits, where a circuit can be a Frame Relay DLCI, an ATM VC, a PPP interface, a Cisco HDLC interface, or an MPLS label-switched path (LSP). Using CCC, packets from the source circuit are delivered to the destination circuit with, at most, the Layer 2 address being changed. No other processing—such as header checksums, TTL decrementing, or protocol processing is done. CCC circuits fall into two categories, logical interfaces, which include DLCIs, VCs, and PPP and Cisco HDLC interfaces; and LSPs. The two circuit categories provide three types of cross-connect: Layer 2 switching—Cross-connects between logical interfaces provide what is essentially Layer 2 switching. The interfaces that you connect must be of the same type. MPLS tunneling—Cross-connects between interfaces and LSPs allow you to connect two distant interface circuits of the same type by creating MPLS tunnels that use LSPs as the conduit. LSP stitching—Cross-connects between LSPs provide a way to “stitch” together two label-switched paths, including paths that fall in two different TED areas. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  116

117 CCC Layer 2 Switching [Diagram: Router A connects to the M40 on DLCI 600; Router B connects on DLCI 601.] A and B have Frame Relay connections to the M40, carrying any type of traffic. The M40 behaves as a switch: Layer 2 packets are forwarded transparently from A to B without regard to content; only the DLCI is changed. CCC supports switching between PPP, Cisco HDLC, Frame Relay PVCs, or ATM PVCs. ATM AAL5 packets are reassembled before sending. CCC Layer 2 switching is very similar to that of a traditional Layer 2 switch, which maps DLCIs or ATM VCs. You configure switched connections manually; no dynamic signaling (such as ATM’s PNNI) is supported. No Layer 3 processing or lookup is done, which means that any Layer 3 protocol can be carried as payload. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  117

118 CCC Layer 2 Switching M40 A B [edit protocols] user@host# show
[Diagram: Router A reaches the M40 on DLCI 600 via so-5/1/0.600; Router B on DLCI 601 via so-2/2/1.601.]
[edit protocols]
show
connections {
    interface-switch connection-name {
        interface so-5/1/0.600;
        interface so-2/2/1.601;
    }
}
For Frame Relay circuits, you must also specify the encapsulation when configuring the DLCI. For each DLCI, you configure whether it is a circuit or a regular logical interface. The DLCI for regular interfaces must be in the range 1 through 511. For CCC interfaces, it must be in the range 512 through 1022.
[edit interfaces]
show
so-5/1/0 {
    unit 600 {
        point-to-point;                   # Default interface type
        encapsulation frame-relay-ccc;
        dlci 600;
    }
}
Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  118

119 CCC MPLS Interface Tunneling
[Diagram: Router A in an ATM access network connects over ATM VC 514 to the M40; an MPLS LSP crosses the IP backbone to the M20, which connects over ATM VC 590 to Router B in another ATM access network.] Transports packets from one interface through an MPLS LSP to a remote interface. Bridges Layer 2 packets from end to end. Supports tunneling between “like” ATM, Frame Relay, PPP, and Cisco HDLC connections. CCC allows you to connect two ATM, FR, PPP, or Cisco HDLC access links via an MPLS tunnel. ATM packets are reassembled on input, then encapsulated in an MPLS header and transmitted down the LSP. At the egress router, the MPLS header is stripped, the packet is divided into cells which are then transmitted through an ATM PVC. The MPLS tunnel is transparent to both ATM networks. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  119

120 CCC MPLS Interface Tunneling
[Diagram: the same topology as the previous slide, with two LSPs between the M40 (at-7/1/1.514) and the M20 (at-3/0/1.590): LSP1 carries traffic toward the M20 and LSP2 carries traffic back toward the M40.]
[edit protocols]
show
connections {
    remote-interface-switch m40-to-m20 {
        interface at-7/1/1.514;
        transmit-lsp lsp1;
        receive-lsp lsp2;
    }
}
[edit protocols]
show
connections {
    remote-interface-switch m20-to-m40 {
        interface at-3/0/1.590;
        transmit-lsp lsp2;
        receive-lsp lsp1;
    }
}
When traffic from Router A (VC 514) reaches the M40, it is encapsulated and placed into an LSP, which is sent through the backbone to the M20. At the M20, the label is removed and the packets are placed onto the ATM PVC (VC 590) and sent to Router B. To configure LSP tunnel cross-connects, you must also configure the CCC encapsulation on the ingress and egress routers (M40 and M20).
[edit interfaces]
show
at-7/1/1 {
    atm-options {
        vpi 1 maximum-vcs 1024;
    }
    unit 514 {
        point-to-point;                   # Default interface type
        encapsulation atm-ccc-vc-mux;
        vci 1.514;
    }
}
Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  120

121 CCC LSP Stitching TE domain 2 TE domain 1 TE domain 3
[Diagram: LSRs spread across traffic engineering domains 1, 2, and 3, with LSPs stitched together at the domain boundaries.] To improve scalability, very large networks running MPLS can be divided into multiple traffic engineering domains. For example, the network can be partitioned into a number of IS-IS areas, which do not have complete information of the entire network. Because the traffic engineering database is constructed from information carried in IS-IS extensions, the traffic engineering domain cannot extend beyond an IS-IS area. CCC enables you to set up an LSP that extends beyond the limits of a traffic engineering domain by stitching LSPs from multiple domains together. You can also stitch together LSPs when one LSP extends into a separate BGP domain that might belong to another ISP. Large networks can be separated into several traffic engineering domains (supports IS-IS area partitioning). CCC allows establishment of LSPs across domains by “stitching” together LSPs from separate domains. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  121

122 CCC LSP Stitching TE domain 1 TE domain 2 LSP stitching
[Diagram: LSR-A through LSR-E spanning TE domains 1 and 2, with the stitch performed at the LSR on the domain boundary.]
[edit protocols]
show
connections {
    lsp-switch LSR-A_to_LSR-E {
        transmit-lsp lsp2;
        receive-lsp lsp1;
    }
    lsp-switch LSR-E_to_LSR-A {
        receive-lsp lsp3;
        transmit-lsp lsp4;
    }
}
LSP stitching cross-connects "stitch" together LSPs to join two LSPs, for example, when the LSPs fall in two different TED areas. Without LSP stitching, a packet travelling from LSR-A to LSR-E is encapsulated on LSR-A (the ingress router for the first LSP), decapsulated on LSR-B (the egress router), and then re-encapsulated on LSR-B (the ingress router for the second LSP). With LSP stitching, you connect LSP1 and LSP2 into a single, stitched LSP, which means that the packet is encapsulated once (on LSR-A) and decapsulated once (on LSR-E). You can use LSP stitching to create a seamless LSP for LSPs carrying any kind of traffic. Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  122

123 Traffice Engineering with MPLS–APRICOT 2000–17 April 2017 Copyright © 2000, Juniper Networks, Inc. MPLS Welcome  123

