Juniper MetaFabric, Westcon 5-Day, Washid Lootfun

1 Juniper MetaFabric, Westcon 5-Day, Washid Lootfun
Sr. Pre-Sales Engineer, February 2014

2 MetaFabric ARCHITECTURE PILLARS

Simple (easy to deploy and use):
- Mix-and-match deployment
- One OS
- Universal building block for any network architecture
- Seamless 1GE -> 10GE -> 40GE -> 100GE upgrades
Open (maximize flexibility):
- Standards-based interfaces: L2, L3, MPLS
- Open SDN protocol support: VXLAN, OVSDB, OpenFlow
- IT automation via open interfaces: VMware, Puppet, Chef, Python
- Junos scripting and SDK
- Standard optics
Smart (save time, improve performance):
- Elastic (scale-out) fabrics: QFabric, Virtual Chassis, Virtual Chassis Fabric
Speaker notes: The MetaFabric architecture includes three pillars that enable the network to innovate at the speed of applications. This enables better troubleshooting, capacity planning, network optimization, and security threat mitigation, plus it helps accelerate the adoption of cloud, mobility, and Big Data applications. The three pillars are:
- Simple: network and security that are simple to acquire, deploy, use, integrate, and scale. From simplification in our devices and network architectures to operational simplicity with automation and orchestration, we continue to build simple solutions, especially as they extend into physical and virtual environments. This ultimately results in better network performance, reliability, and reduced operating costs.
- Open: an approach that extends across our devices, how we work with technology partners in the ecosystem, and open communities, giving our customers the flexibility they need to integrate with any heterogeneous data center environment and support any application, any policy, and any SDN protocol, without disruption or fear of vendor lock-in.
- Smart: using network intelligence and analytics to drive insight and turn raw data into knowledge that can be acted upon. Customers benefit from a flexible and adaptable data center network. Junos Space Network Director enables users to focus on their respective roles: build, deploy, monitor, troubleshoot, report. This saves time through better troubleshooting, capacity planning, network optimization, and security threat mitigation, and it helps accelerate the adoption of cloud, mobility, and Big Data applications.

3 MetaFabric ARCHITECTURE portfolio
Switching: flexible building blocks; simple switching fabrics.
Routing: universal data center gateways.
Management: smart automation and orchestration tools.
SDN: simple and flexible SDN capabilities.
Data Center Security: adaptive security to counter data center threats.
Solutions & Services: reference architectures and professional services.
Speaker notes: a re-hash of the categories in the MetaFabric architecture portfolio.

4 EX switches

5 EX SERIES PRODUCT FAMILY
MODULAR (AGGREGATION/CORE): EX6210 (dense access/aggregation switch); EX8208 and EX8216 (core/aggregation switches); EX9204, EX9208, and EX9214 (programmable core/distribution switches).
FIXED (ACCESS): EX2200 and EX2200-C (entry-level access switches); EX3300, EX4200, and EX4300 (proven and versatile access switches); EX4550 (powerful aggregation switch).
One Junos; managed by Network Director.
Speaker notes: What you are looking at is our portfolio, and as you can see, whether you need a 12-port access switch or a multiple-port access switch, we now have a complete solution.

6 EX4300 Series Switches

Product description:
- 24/48 x 10/100/1000BASE-T access ports
- 4 x 1/10GbE (SFP/SFP+) uplink ports
- 4 x 40GbE (QSFP+) VC/uplink ports
- PoE/PoE+ options
- Redundant, field-replaceable components (power supplies, fans, uplinks)
- DC power options
Notable features:
- L2 and basic L3 (static, RIP) included; OSPF and PIM with the enhanced license; BGP and IS-IS with the advanced license
- Virtual Chassis: up to 10 members over the 40GbE QSFP+ VC ports
- 12 hardware queues per port
- Front-to-back (AFO) and back-to-front (AFI) airflow options
Target applications:
- Campus data closets
- Top-of-rack data center / high-performance 1GbE server attach
- Small network cores
SKU summary:
- EX4300-24P: 24 ports, all PoE/PoE+, 550 W PoE budget
- EX4300-24T: 24 ports, no PoE
- EX4300-48P: 48 ports, all PoE/PoE+, 900 W PoE budget
- EX4300-48T, EX4300-48T-AFI, EX4300-48T-DC, EX4300-48T-DC-AFI: 48 ports, no PoE

7 Introducing The EX9200 Ethernet Switch Available March 2013
- Native programmability (Junos image); automation toolkit; programmable control/management planes and SDK (SDN, OpenFlow, etc.)
- Scale: 1M MAC addresses; 256K IPv4 and 256K IPv6 routes; 32K VLANs (bridge domains)
- L2 and L3 switching; MPLS and VPLS/EVPN*; ISSU; Junos Node Unifier
- EX9204, EX9208, EX9214: 4, 8, and 14 slots; 240G per slot
- Line cards: 40x1GbE, 32x10GbE, 4x40GbE, and 2x100GbE
- Powered by Juniper One custom silicon (* = on roadmap)

8 EX9200 Line Cards

1GbE line cards: 40 x 10/100/1000BASE-T and 40 x 100FX/1000BASE-X SFP (EX9200-40T / EX9200-40F)
10GbE line card: 32 x 10GbE SFP+, up to 240G throughput (EX9200-32XS)
40GbE line card: 4 x 40GbE QSFP+, up to 120G throughput (EX9200-4QS)
100GbE line card: 2 x 100GbE CFP + 8 x 10GbE SFP+, up to 240G throughput (EX9200-2C-8XS)

9 EX9200 Flexibility Virtual Chassis
Single chassis:
- High availability: redundant RE and switch fabric, redundant power and cooling
- Performance and scale: modular configuration, high-capacity backplane
- Easy to manage: single image, single configuration, one management IP address
- Single control plane: single protocol peering, single RT/FT
Virtual Chassis, a notch up:
- Scale ports and services beyond one chassis
- Physical placement flexibility
- Redundancy beyond one chassis
- One management and control plane (requires dual REs per chassis)
A hedged two-member configuration sketch follows.
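For illustration only, a hedged sketch of a two-member EX9200 Virtual Chassis in preprovisioned mode (the serial numbers are placeholders, and the exact procedure for converting interconnect ports into VC ports varies by platform and Junos release):

set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine serial-number JN111111AAAA
set virtual-chassis member 1 role routing-engine serial-number JN222222BBBB
set virtual-chassis no-split-detection

With only two members, split detection is typically disabled as shown; the chassis-to-chassis links are then set as Virtual Chassis ports (for example with an operational command such as "request virtual-chassis vc-port set pic-slot 2 port 0", treated here as an assumption rather than an exact recipe).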

10 Collapsed Distribution & Core
ENTERPRISE SWITCHING ARCHITECTURES (managed with Network Director): Multi-Tier, Collapsed Distribution & Core, and Distributed Access.
Problem: existing architectures lack scale and flexibility and are operationally complex.
Solutions: collapse core and distribution; Virtual Chassis at the access layer (including across wiring closets); Virtual Chassis at both the access and distribution layers.
Benefits: simplification through consolidation; scale, aggregation, and performance; management simplification and reduced opex; flexibility to expand and grow.

11 VIRTUAL CHASSIS DEPLOYMENT ON ENTERPRISE Span Horizontal or Vertical
Diagram: two Virtual Chassis deployment patterns, "Connect Wiring Closets" and "Collapse a Vertical Building", across Buildings A and B. EX Series Virtual Chassis groups (EX3300VC-1a, EX4300VC-2a, EX4300VC-3a, EX4550VC-1a, EX6200-1b, EX9200VC-1b) span closets 1 and 2 and the building floors, serving wireless access points (WLAs) and a WLC cluster over LAGs, with 10GbE/40GbE uplinks and 40G VC ports toward an EX4300 aggregation/core layer, an SRX Series cluster for Internet access, centralized DHCP and other services, and application servers.

12 Private MPLS Campus Core with VPLS or L3VPN
DEPLOYING MPLS AND VPN ON ENTERPRISE (METRO/DISTRIBUTED CAMPUS): stretch the connectivity for a seamless network.
Benefits:
- High availability
- STP-free networks
- Layer 2 VLAN extension
- Robust routing and MPLS VPNs
Diagram: a private MPLS campus core with VPLS or L3VPN interconnects core switches (PE) across three sites; access switches (CE) and wireless access points at each site map VLANs 1-3 into per-department VPNs (R&D, Marketing/Sales, Finance/Business Ops). A hedged L3VPN configuration sketch follows.
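To make one of the per-department VPNs concrete, here is a hedged L3VPN VRF sketch for a single PE core switch (the instance name, interface, AS number 65000, and route-target values are hypothetical):

set routing-instances RND-VPN instance-type vrf
set routing-instances RND-VPN interface ge-0/0/1.10
set routing-instances RND-VPN route-distinguisher 65000:10
set routing-instances RND-VPN vrf-target target:65000:10
set routing-instances RND-VPN vrf-table-label

Each department (R&D, Marketing/Sales, Finance/Business Ops) would get its own VRF with a distinct route target; where the Layer 2 VLAN itself must be stretched between sites, a VPLS instance would be used instead.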

13 JUNIPER ETHERNET SWITCHING
Simple, reliable, secure:
- #3 market share within 2 years
- 20,000+ switching customers, enterprise and service provider
- 23+ million ports deployed

14 QFX5100 Platform

15 QFX5100 Series: Next-Generation Top-of-Rack Switches

- Multiple 10GbE/40GbE port-count options
- Supports multiple data center switching architectures
- New innovations: Topology-Independent In-Service Software Upgrade, Insight analytics, MPLS, GRE tunneling
- Rich L2/L3 feature set, including MPLS
- Low latency
- SDN ready

16 QFX5100: Next-Generation ToR

QFX5100-48S | QFX5100-96S | QFX5100-24Q
Speaker notes: To address these challenges, Juniper is introducing the QFX5100 family of nimble, high-performance, low-latency, feature-rich Layer 2 and Layer 3 switches, optimized for Fibre Channel over Ethernet environments. The QFX5100 is Juniper's strategic ToR family, supporting multiple 10GbE/40GbE port-count options and acting as a universal building block for multiple data center switching architectures, including Juniper's:
- Mixed 1/10/40GbE Virtual Chassis architecture
- Virtual Chassis Fabric architecture
- QFabric architecture (where it enhances performance)
The QFX5100 also supports open architectures such as spine-and-leaf and Layer 3 fabrics. In addition to Virtual Chassis Fabric, new innovations enabled by the QFX5100 include:
- Topology-Independent In-Service Software Upgrade, which enables hitless data center operations
- The Insight Analytics software module, which captures and reports microburst events that exceed defined thresholds
QFX5100 switches also support virtualized network environments, including Juniper Contrail and VMware NSX Layer 2 gateway services. Three QFX5100 switch models are available:
- QFX5100-48S: 10GbE switch offering 48 dual-mode small form-factor pluggable (SFP/SFP+) ports and six quad small form-factor pluggable plus (QSFP+) 40GbE ports; supports up to 72 x 10GbE ports
- QFX5100-96S: 10GbE switch providing 96 dual-mode SFP/SFP+ ports and eight QSFP+ 40GbE ports; supports up to 104 x 10GbE ports
- QFX5100-24Q: 40GbE switch featuring 24 QSFP+ ports and two expansion slots, each of which can accommodate a four-port QSFP+ expansion module
Slide callouts:
- QFX5100-48S: 48 x 1/10GbE SFP+, 6 x 40GbE QSFP+ uplinks, 1.44 Tbps throughput, 1U fixed form factor
- QFX5100-96S: 96 x 1/10GbE SFP+, 8 x 40GbE QSFP+ uplinks, 2.56 Tbps throughput, 2U fixed form factor
- QFX5100-24Q: 24 x 40GbE QSFP+, 8 x 40GbE expansion ports, 2.56 Tbps throughput, 1U fixed form factor
Low latency | Rich L2/L3 feature set | Optimized FCoE

17 QFX5100-48S

Q4 CY2013. Front (port-side) view:
- 48 x 1/10GbE SFP+ interfaces
- 6 x 40GbE QSFP+ interfaces
- Mgmt0 (RJ45), Mgmt1 (SFP), console, USB
- 4+1 redundant fan trays, color-coded (orange: AFO, blue: AFI), hot-swappable
- 1+1 redundant 650 W power supplies, color-coded, hot-swappable
Each 40GbE QSFP+ interface can be converted to 4 x 10GbE interfaces without a reboot, for a maximum of 72 x 10GbE interfaces (720 Gbps). CLI to change port speed (a worked example follows):
set chassis fpc <fpc-slot> pic <pic-slot> port <port-number> channel-speed 10G
set chassis fpc <fpc-slot> pic <pic-slot> port-range <low> <high> channel-speed 10G
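As a hedged worked example of the channelization CLI above (the FPC/PIC/port numbering is an assumption and differs by model, Virtual Chassis membership, and release), converting one QSFP+ uplink of a standalone QFX5100-48S to four 10GbE ports might look like:

set chassis fpc 0 pic 0 port 48 channel-speed 10g

After a commit, the QSFP+ typically shows up as four 10GbE logical interfaces (for example xe-0/0/48:0 through xe-0/0/48:3); the port-range form shown above channelizes several uplinks in one statement.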

18 QFX5100-96S

Q1 CY2014. Front (port-side) view:
- 96 x 1/10GbE SFP+ interfaces
- 8 x 40GbE QSFP+ interfaces
Supports two port configuration modes:
- 96 x 10GbE SFP+ plus 8 x 40GbE interfaces
- 104 x 10GbE interfaces
1.28 Tbps (2.56 Tbps full duplex) switching performance; new 850 W 1+1 redundant, color-coded, hot-swappable power supplies; 2+1 redundant, color-coded, hot-swappable fan trays.

19 QFX5100-24Q

Q1 CY2014. Front (port-side) view (same FRU-side configuration as the QFX5100-48S):
- 24 x 40GbE QSFP+ interfaces
- Two hot-swappable 4 x 40GbE QSFP+ expansion modules (QICs)
Port configuration has four modes; changing mode requires a reboot:
- Default (fully subscribed) mode: no QIC support; maximum 24 x 40GbE or 96 x 10GbE interfaces; line-rate performance at all packet sizes
- 104-port mode: only the first 4x40GbE QIC is supported, with its last two 40GbE interfaces disabled and its first two QSFP+ ports operating as 8 x 10GbE; the second QIC slot cannot be used and there is no native 40GbE support; all base ports can be channelized to 4 x 10GbE (24 x 4 = 96), for 104 x 10GbE interfaces in total
- 4x40GbE PIC mode: all base ports can be channelized; only the 4x40GbE QIC is supported (in either QIC slot) but it cannot be channelized; yields 32 x 40GbE, or 96 x 10GbE plus 8 x 40GbE
- Flexi PIC mode: supports all QICs, but QICs cannot be channelized; only base ports 4-24 can be channelized; also supports a 32 x 40GbE configuration

20 Advanced JUNOS SOFTWARE ARCHITECTURE
Provides the foundation for advanced functions:
- ISSU (In-Service Software Upgrade) enables hitless upgrades
- Other Juniper applications can deliver additional services in a single switch
- Third-party applications
- Much faster system bring-up
Architecture: a Linux kernel (CentOS) with KVM hosts the active and standby Junos VMs alongside Juniper and third-party application VMs, connected through a host network bridge.

21 QFX5100 Hitless Operations Dramatically Reduce Maintenance Windows

With Topology-Independent In-Service Software Upgrade (ISSU), the QFX5100 can dramatically reduce network maintenance windows. The QFX5100 is the only product in its class to offer true topology-independent ISSU.
The typical approach to ISSU for ToR/access switches is to rely on a resilient backup switch in the network to provide service continuity while a switch is being upgraded and rebooted. This results in:
- Network performance degradation for the duration of the switch upgrade, as one element of the resilient pair is out of service
- Network resiliency risk, as resiliency is compromised during the upgrade process
- Long maintenance windows and operational inefficiency, as only one switch can be updated at a time, requiring a sequential upgrade process
With topology-independent ISSU, there is no dependency on a resilient backup switch for hitless software upgrades. During software upgrades there is no impact on network performance and no risk to network resiliency, because all switches continue to operate during the upgrade process. There is also no need to plan for long maintenance windows, since all switches can be upgraded simultaneously.
How it works: the QFX5100 is built on an x86 processor running a hardened Linux kernel, and the Junos operating system runs in a kernel-based virtual machine (KVM). To upgrade the operating system, an upgrade command is issued from the master Junos VM (an example follows). A new Junos disk image is created and verified, then launched as a new backup Junos VM. The system waits for the Packet Forwarding Engines to synchronize before swapping roles (detaching devices from the current master and attaching them to the backup). Once the upgrade is complete, the former master Junos VM is shut down, delivering a truly seamless software upgrade.
Benefits: seamless upgrade, no traffic loss, no performance impact, no resiliency risk, no port flap, data center efficiency.
Slide graphics contrast QFX5100 topology-independent ISSU with competitive ISSU approaches, showing the high-level QFX5100 architecture (Linux kernel on x86 hardware, Broadcom Trident II PFEs, master and backup Junos VMs in KVM) and the impact on network performance and network resiliency during a switch software upgrade.
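The upgrade itself is driven with the standard Junos ISSU request; a hedged example (the image path and file name are hypothetical):

request system software in-service-upgrade /var/tmp/jinstall-qfx-5-13.2X51-D20-domestic-signed.tgz

This launches the new backup Junos VM from the named image, waits for PFE synchronization, and swaps mastership as described above; "show version" afterwards confirms the new running release.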

22 Introducing VCF architecture
- Spine switches: integrated L2/L3 switches; connect leafs, the core/WAN, and services gateways
- Leaf switches: integrated L2/L3 gateways; connect virtual and bare-metal servers; local switching
- Any-to-any connections between spines and leafs
- Single switch to manage
Diagram: leaf switches attach virtualized servers (VM/vSwitch) and bare-metal servers.

23 Plug-and-Play Fabric

- New leafs are auto-provisioned
- Automatic configuration and image synchronization
- Any node that is not at factory defaults is treated as a network device rather than a new fabric member
Diagram: the fabric connects to services gateways and the WAN/core; leafs attach virtualized (VM/vSwitch) and bare-metal servers. A hedged auto-provisioning sketch follows.
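A hedged sketch of what enabling this plug-and-play behavior might look like on the spine members, assuming the auto-provisioned Virtual Chassis Fabric mode (the serial numbers are placeholders; leaf switches shipped at factory defaults would then join and receive their configuration and image automatically):

set virtual-chassis auto-provisioned
set virtual-chassis member 0 role routing-engine serial-number TA3713470001
set virtual-chassis member 1 role routing-engine serial-number TA3713470002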

24 Virtual Chassis Fabric Deployment Option

Virtual Chassis Fabric (VCF), 10G/40G: QFX5100-24Q spines beneath an EX9200 core; QFX5100-48S leafs for 10G access; QFX3500 leafs for existing 10G access; EX4300 leafs for existing 1G access.

25 QFX5100 – Software Features
Planned FRS features*:
- L2: xSTP, VLAN, LAG, LLDP/LLDP-MED
- L3: static routing, RIP, OSPF, IS-IS, BGP, vrf-lite, GRE
- Multipath: MC-LAG, L3 ECMP
- IPv6: neighbor discovery, router advertisement, static routing, OSPFv3, BGP for IPv6, IS-IS for IPv6, VRRPv3, ACLs
- MPLS, L3VPN, 6PE
- Multicast: IGMPv2/v3, IGMP snooping/querier, PIM-Bidir, ASM, SSM, Anycast, MSDP
- QoS: classification, CoS/DSCP rewrite, WRED, SP/WRR, ingress/egress policing, dynamic buffer allocation, FCoE/lossless flows, DCBX, ETS, PFC, ECN
- Security: DAI, PACL, VACL, RACL, storm control, control-plane protection
- 10G/40G FCoE, FIP snooping
- Microburst monitoring and analytics
- sFlow, SNMP (a hedged sFlow configuration sketch follows this list)
- Python
Planned post-FRS features:
- Virtual Chassis, mixed mode: 10-member Virtual Chassis mixing QFX5100, QFX3500/QFX3600, and EX4300
- Virtual Chassis Fabric: 20 nodes at FRS, mixing QFX5100, QFX3500/QFX3600, and EX4300
- Virtual Chassis features: parity with standalone
- HA: NSR, NSB, graceful restart for routing protocols, GRES
- ISSU on standalone QFX5100 and on all QFX5100 Virtual Chassis and Virtual Chassis Fabric; NSSU in mixed-mode Virtual Chassis or Virtual Chassis Fabric
- 64-way ECMP
- VXLAN gateway*
- OpenStack, CloudStack integration*
* After the Q1 time frame.
* Please refer to the release notes and manual for the latest information.
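As one small, hedged example from the FRS feature list above, an sFlow export sketch (the collector address, sampling rate, polling interval, and interface are placeholders):

set protocols sflow sample-rate ingress 2048
set protocols sflow polling-interval 20
set protocols sflow collector 192.0.2.10 udp-port 6343
set protocols sflow interfaces xe-0/0/10.0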

26 Virtual Chassis Fabric
QFX5100 architectures: Virtual Chassis Fabric (new, up to 20 members), Virtual Chassis (improved, up to 10 members), and QFabric (improved, up to 128 members) are each managed as a single switch; spine-leaf and Layer 3 fabric are also supported.
Speaker notes: As mentioned, the QFX5100's flexibility is evidenced by its role as the universal building block in multiple data center switching architectures. In Juniper architectures, where all elements are managed as a single switch, the QFX5100 can be used in:
1) Virtual Chassis. In conjunction with any combination of EX4300, QFX3500, and QFX3600 switches to deliver a mixed 1/10GbE architecture; up to 10 members (switches) are managed as a single switch. Note 1: EX4200, EX4500, and EX4550 switches cannot be part of a QFX Series-based Virtual Chassis. Note 2: unlike an EX4500-based Virtual Chassis, there are no dedicated Virtual Chassis ports on the QFX5100; standard 10GbE or 40GbE ports are used to interconnect Virtual Chassis members.
2) QFabric. The QFX5100 can be deployed as a QFabric Node in conjunction with QFX3500 and QFX3600 switches; up to 128 members (Nodes) are managed as a single switch supporting over 6,000 ports of server connectivity. Using the QFX5100 as a QFabric Node increases L3 host routes and multicast routes 8x, reduces node latency from 900 ns to 550 ns, and doubles the 10GbE port density and the number of L2 routes. Note: only the QFX5100-48S can be used as a QFabric Node.
3) Virtual Chassis Fabric. The genesis of Virtual Chassis Fabric is a combination of the best of Virtual Chassis and the best of QFabric:
- Virtual Chassis is optimized for 1GbE environments, is deployed in a ring topology (mesh topologies are now also supported with the introduction of the QFX5100), requires separate clusters across network tiers (for example, separate clusters in the access and aggregation tiers of a data center network), and has a 10-member limit; all members are switches managed as a single switch.
- QFabric is a flat, single-tier fabric topology optimized for 10GbE and 40GbE environments, with a 128-member limit; it comprises switches (Nodes), Interconnects, and Directors, all managed as a single switch.
- Virtual Chassis Fabric takes the best of both: a 20-member limit supporting up to 768 ports of server connectivity, optimized for mixed 1/10/40GbE environments (EX4300, QFX3500, and QFX3600 supported in addition to QFX5100), a single network tier with a fabric topology, and all members managed as a single switch. Virtual Chassis Fabric expands on existing Virtual Chassis capabilities by adding support for spine-leaf switching architectures, which are ideal for high-performance, low-latency data center deployments. Note 1: a minimum of 2 and a maximum of 4 spines are supported; spines must be QFX5100. Note 2: a maximum of 18 leafs are supported; leafs can be EX4300, QFX3500, QFX3600, or QFX5100.
4) Spine-Leaf. The QFX5100 can also be used in open architectures such as spine-leaf, as either a spine or a leaf.
5) Layer 3 Fabrics. Layer 3 fabrics for very large-scale data center fabrics.

27 VCF Overview

Flexible:
- Up to 768 ports
- 1GbE, 10GbE, and 40GbE
- 2-4 spines; 10GbE and 40GbE spine links
- L2, L3, and MPLS
Simple:
- Single device to manage
- Predictable performance
- Integrated RE and integrated control plane
Available:
- 4 x integrated RE
- GRES/NSR/NSB, ISSU/NSSU
- Any-to-any connectivity, 4-way multipath
Automated:
- Plug-and-play
- Analytics for traffic monitoring
- Network Director

28 10GbE/40GbE Fixed Switches Roadmap Preview

3T 2013:
- QFX5100 hardware: 48xSFP+
- EX4550 hardware: 2x40GbE module
- QFX5100 software: L2 and L3 unicast/multicast; L2 and L3 IPv6; L2 and L3 QoS; L2 and L3 ACLs; MC-LAG; L3 ECMP; FCoE transit; ZTP
- QFX software: Virtual Chassis
- EX4550 software: MACsec; GRE; Virtual Chassis on 40GbE ports; 4x10 breakout on 40GbE ports
1T 2014*:
- QFX5100 hardware: 24xQSFP+; 24xSFP+; 96xSFP+
- QFX5100 software: 10-member Virtual Chassis (mix of QFX5100, QFX, EX4300); V20 (mix of QFX5100, QFX, EX4300); ISSU on standalone QFX5100; ISSU on QFX5100 VC and V20; NSSU in a mixed VC and V20; MACsec on QFX5100-48S; 64-way ECMP
2T 2014*:
- QFX5100 hardware: 48x10GT
- QFX5100 software: VXLAN gateway; PVLAN; QinQ; ERSPAN; 802.3ah; 802.1ag; OpenFlow 1.3; Puppet
* In planning.

29 CDBU Switching roadmap summary
Future hardware: EX4300; EX9200 2x100G LC; QFX5100 (24QSFP+); QFX GBASE-T; Opus PTP; EX GBASE-T; QFX5100 (48SFP+); EX9200 6x40GbE LC; QFX5100 (24SFP+); EX9200 MACsec; EX GbE module; QFX5100 (96SFP+); EX GbE per slot; EX4300 fiber.
Future software: Analytics; Virtual Chassis with QFX Series; V20; VXLAN gateway; Opus; QFX3000-M/G 10GBASE-T Node; ND 1.5; ISSU on Opus; VXLAN routing; EX9200 and QFX3000-M/G QinQ, MVRP; QFX3000-M/G L3 multicast, 40GbE; ND 2.0; OpenFlow 1.3; QFX3000-M/G QFX5100 (48 SFP+) Node.
Solutions: DC 1.0 Virtualized IT DC; Campus 1.0; DC 1.1 ITaaS and VDI; DC 2.0 IaaS with overlay.

30 MX Series

31 SDN and the MX Series

Delivering innovation inside and outside of the data center. Unique new WAN and SDN capabilities on MX Series routers:
- USG (Universal SDN Gateway): the most advanced and flexible SDN bridging and routing gateway
- EVPN (Ethernet VPN): next-generation technology for connecting multiple data centers and providing seamless workload mobility
- VMTO (VM Mobility Traffic Optimizer): creating the most efficient network paths for mobile workloads
- ORE (Overlay Replication Engine): a hardware-based, high-performance services engine for broadcast and multicast replication within SDN overlays
Together these provide flexible, SDN-enabled silicon for seamless workload mobility and connections between private and public cloud infrastructures.
Speaker notes: Juniper's MX Series routers have added support to continue their leadership position as the most efficient and flexible platform for data center interconnect. Adding to an already robust list of DCI protocols, EVPN maximizes performance and improves the user experience by creating the most efficient and open forwarding paths across the WAN. In addition, as our customers begin to leverage and adopt SDN, the high-performance MX with its programmable Trio silicon delivers an ideal Layer 2 and Layer 3 data center interconnect gateway and is the industry's only router that connects multi-vendor SDN assets. There are four key things we are introducing today on the MX Series:
- MX Series Universal SDN Gateway (USG): the most advanced and flexible SDN routing and bridging converter for inter-, intra-, and cross-overlay communication. Only Juniper has it. It enables connections from one SDN domain to another SDN domain of unlike type, simplifying the migration to SDN by letting you transition your data centers one rack, one pod, and/or one data center at a time.
- MX Series ORE (Overlay Replication Engine): a hardware services replicator for broadcast, unknown-unicast, and multicast in SDN-enabled data centers. Only Juniper's MX Series, powered by Trio silicon, eliminates the need to deploy virtual appliances for BUM traffic replication, which would complicate SDN deployment and reduce throughput.
- MX Series Ethernet VPN (EVPN): next-generation MPLS encapsulation that allows Layer 2 stretch with active/active forwarding and MAC learning via the control plane. Cisco recommends proprietary protocols like OTV and LISP for DCI; EVPN enables seamless and open VM mobility across subnets, in the data center or over the WAN, to support disaster recovery or cloud bursting.
- MX Series VM Traffic Optimizer (VMTO): builds on EVPN capabilities to create the most efficient network paths for mobile workloads, changing paths automatically when a VM moves to a different location. (Background: VMTO virtualizes the default gateway for each VLAN onto every MX Series router. We can use the MAC information to send specific route updates into our IGP. This is similar to what we do in QFabric to distribute the default gateway to every ToR switch and share MAC information across the fabric.)
No additional license is required for any of these features, and they are supported on all MX Series platforms.

32 VXLAN PART OF UNIVERSAL GATEWAY FUNCTION ON MX
1H 2014. VXLAN as part of the universal gateway function on MX:
- High-scale multi-tenancy: L3VPN, VPLS, and EVPN; VTEP tunnels per tenant; P2P and P2MP tunnels
- Ties into the full L2 and L3 feature set on MX: unicast and multicast forwarding; IPv4 and IPv6; L2 bridge domains and virtual switches
- Gateway between LAN, WAN, and overlay: ties all media together, giving migration options to the DC operator
Diagram: the DC gateway maps each tenant/virtual DC to its own bridge domain, VLAN ID, LAN interfaces, IRB interface, and VTEP/VNI (tenant 0 on bridge-domain.0 with VLAN 1001 and VNI 0, tenant 1 on bridge-domain.1 with VLAN 1002 and VNI 1, up through tenant N). A hedged configuration sketch follows.
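A hedged sketch of the per-tenant plumbing on the MX gateway for one bridge domain (the names, VLAN ID, address, and VNI are placeholders, and the exact VXLAN binding statements are an assumption that varies by Junos release):

set bridge-domains TENANT0-BD vlan-id 1001
set bridge-domains TENANT0-BD routing-interface irb.0
set interfaces irb unit 0 family inet address 198.51.100.1/24
set bridge-domains TENANT0-BD vxlan vni 1001

Each additional tenant/virtual DC would get its own bridge domain, IRB unit, and VNI, which is what provides the per-tenant separation shown in the diagram.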

33 Network Devices in the Data Center

USG (Universal SDN Gateway) context:
- Bare-metal servers: databases, HPC, legacy apps, non-x86, IP storage
- Virtualized servers: ESX, ESXi, Hyper-V, KVM, Xen
- L4-7 appliances: firewalls, load balancers, NAT, intrusion detection, VPN concentrators
- SDN servers: NSX on ESXi, NSX on KVM, SC with Hyper-V, Contrail on KVM, Contrail on Xen

34 USG (UNIVERSAL SDN GATEWAY)
Introducing four new options for SDN enablement:
- Layer 2 USG: SDN to IP (Layer 2); SDN-to-non-SDN translation within the same IP subnet
- Layer 3 USG: SDN to IP (Layer 3); SDN-to-non-SDN translation across different IP subnets
- SDN USG: SDN to SDN; same or different IP subnet, same or different overlay
- WAN USG: SDN to WAN (remote data centers, branch offices, Internet); same or different IP subnet, same or different encapsulation

35 USGs Inside the Data Center: Layer 2 USG

(Universal SDN Gateway) Using Layer 2 USGs to bridge between SDN pods and devices that reside within the same IP subnet:
- Bare-metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
- Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways
Diagram: within Data Center 1, a Layer 2 USG bridges VXLAN segments in SDN pod 1 to native IP Layer 2 segments in the legacy pods and L4-7 services.

36 USGs Inside the Data Center: Layer 3 USG

(Universal SDN Gateway) Using Layer 3 USGs to route between SDN pods and devices that reside in different IP subnets:
- Bare-metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
- Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways
Diagram: within Data Center 1, a Layer 3 USG routes between VXLAN segments in SDN pod 1 and native IP Layer 3 segments in the legacy pods and L4-7 services.

37 USGs inside the Data center
(Universal SDN Gateway) Using SDN USGs to communicate between islands of SDN:
- NSX to NSX: risk, scale, change control, administration
- NSX to Contrail: multi-vendor environments, migrations
Diagram: within Data Center 1, an SDN USG stitches the VXLAN overlay of the NSX SDN pods to the MPLS-over-GRE overlay of the Contrail SDN pod.

38 USGs for remote connectivity
(Universal SDN Gateway) Using WAN USGs to communicate with resources outside the local data center:
- Data center interconnect: SDN to VPLS, EVPN, or L3VPN
- Branch offices: SDN to GRE or IPsec
- Internet: SDN to IP (Layer 3)
Diagram: a WAN USG in Data Center 1 connects the local VXLAN overlay to EVPN toward Data Center 2, GRE toward branch offices, and native IP Layer 3 toward the Internet.

39 Universal gateway solutions
USG (Universal SDN Gateway) Diagram: a composite view of Data Center 1 with all four USG roles: a Layer 2 USG and a Layer 3 USG bridging and routing between SDN pod 1 (VXLAN) and the legacy pods and L4-7 services (native IP L2/L3); an SDN USG connecting NSX and Contrail SDN pods (VXLAN and MPLS over GRE); and a WAN USG connecting to Data Center 2 (EVPN), branch offices (GRE), and the Internet (native IP L3).

40 USG Comparisons

(Universal SDN Gateway) Comparison of the four USG roles; the platform columns in the original table are QFX5100, MX Series/EX9200, x86 appliance, competing ToRs, and competing chassis.
- Layer 2 USG: SDN-to-non-SDN translation, same IP subnet. Use case: NSX or Contrail talks Layer 2 to non-SDN VMs, bare metal, and L4-7 services.
- Layer 3 USG: SDN-to-non-SDN translation, different IP subnet. Use case: NSX or Contrail talks Layer 3 to non-SDN VMs, bare metal, L4-7 services, and the Internet.
- SDN USG: SDN-to-SDN translation, same or different IP subnet, same or different overlay. Use case: NSX or Contrail talks to other pods of NSX or Contrail.
- WAN USG: SDN-to-WAN translation, same or different IP subnet. Use case: NSX or Contrail talks to other remote locations (branch, DCI).

41 EVPN (Ethernet VPN): Next-Generation Technology for Connecting Multiple Data Centers and Providing Seamless Workload Mobility

42 Pre-EVPN: Layer 2 Stretch Between Data Centers

(Ethernet VPN) Without EVPN:
- Data plane: only one path can be active at a given time; the remaining links are put into standby mode.
- Control plane: Layer 2 MAC tables are populated via the data plane (as on a traditional L2 switch), resulting in flooding of packets across the WAN due to out-of-sync MAC tables.
Diagram: Server 1 (MAC AA) in Data Center 1 and Server 2 (MAC BB) in Data Center 2 share VLAN 10 across a private MPLS WAN without EVPN. Router 1's MAC table holds only AA (VLAN 10, xe-1/0/0.10); Router 2's MAC table holds only BB (VLAN 10, xe-1/0/0.10).

43 Post-EVPN: Layer 2 Stretch Between Data Centers

(Ethernet VPN) With EVPN:
- Data plane: all paths are active; inter-data-center traffic is load-balanced across all WAN links.
- Control plane: Layer 2 MAC tables are populated via the control plane (similar to QFabric), eliminating flooding by keeping the MAC tables synchronized between all EVPN nodes.
Diagram: the same topology as the previous slide, but now Router 1's MAC table holds AA on xe-1/0/0.10 and BB on ge-1/0/0.10, and Router 2's MAC table holds BB on xe-1/0/0.10 and AA on ge-1/0/0.10. A hedged EVPN configuration sketch follows.
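A hedged sketch of an EVPN instance on one of the MX routers (the AS number, route distinguisher, group and instance names, and interface are hypothetical, and the exact statements depend on the Junos release):

set protocols bgp group wan-ibgp family evpn signaling
set routing-instances DC-STRETCH instance-type virtual-switch
set routing-instances DC-STRETCH route-distinguisher 65000:10
set routing-instances DC-STRETCH vrf-target target:65000:10
set routing-instances DC-STRETCH protocols evpn extended-vlan-list 10
set routing-instances DC-STRETCH bridge-domains VLAN10 vlan-id 10
set routing-instances DC-STRETCH bridge-domains VLAN10 interface xe-1/0/0.10

With this in place, MACs learned on xe-1/0/0.10 are advertised to the remote PE as EVPN MAC (type 2) routes, which is what keeps the two MAC tables in the diagram synchronized.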

44 VMTO (VM Mobility Traffic Optimizer)

Creating the most efficient network paths for mobile workloads.

45 The Need for L2 Location Awareness

(VM Mobility Traffic Optimizer) Diagram: two side-by-side scenarios, one without VMTO and one with VMTO enabled, each showing VLAN 10 stretched between DC1 and DC2 across a private MPLS WAN.

46 Without VMTO: Egress Trombone Effect
(VM Mobility Traffic Optimizer)
Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1 (Server 1 sits on VLAN 20, Servers 2 and 3 on VLAN 10; each server subnet is a /24).
Problem: Server 3's active default gateway for VLAN 10 is in Data Center 2 (active VRRP default gateway in DC 2, standby VRRP gateways elsewhere).
Effect: traffic must travel via Layer 2 from Data Center 3 to Data Center 2 to reach VLAN 10's active default gateway, and only then can the packet be routed toward Data Center 1. This results in duplicate traffic on WAN links and suboptimal routing, hence the "egress trombone effect".

47 With VMTO: No Egress Trombone Effect
(VM Mobility Traffic Optimizer)
Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1.
Solution: virtualize and distribute the default gateway so that an active IRB default gateway exists on every router that participates in the VLAN.
Effect: egress packets can be sent to any router on VLAN 10, allowing routing to be done in the local data center. This eliminates the "egress trombone effect" and creates the most optimal forwarding path for inter-DC traffic.

48 Without VMTO: Ingress Trombone Effect

VMTO (VM Mobility Traffic Optimizer)
Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3.
Problem: Data Center 1's edge router prefers the path to Data Center 2 for the VLAN 10 /24 subnet and has no knowledge of individual host IPs.
DC 1's edge router table without VMTO:
  VLAN 10 subnet, mask /24, cost 5, next hop Data Center 2
  VLAN 10 subnet, mask /24, cost 10, next hop Data Center 3
Effect: traffic from Server 1 is first routed across the WAN to Data Center 2 because of the lower-cost route for the /24 subnet; the edge router in Data Center 2 then sends the packet via Layer 2 to Data Center 3.

49 With VMTO: No Ingress Trombone Effect

VMTO (VM Mobility Traffic Optimizer)
Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3.
Solution: in addition to advertising the summary /24 route, the data center edge routers also advertise /32 host routes that represent the location of local servers.
DC 1's edge router table with VMTO:
  VLAN 10 subnet, mask /24, cost 5, next hop Data Center 2
  VLAN 10 subnet, mask /24, cost 10, next hop Data Center 3
  Server 3 host route, mask /32, cost 5, next hop Data Center 3
Effect: ingress traffic destined for Server 3 is sent directly across the WAN from Data Center 1 to Data Center 3, eliminating the "ingress trombone effect" and creating the most optimal forwarding path for inter-DC traffic. A hedged sketch of the host-route advertisement follows.
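The advertisement side of this behavior can be pictured with a hedged routing-policy sketch that exports /32 host routes toward the WAN (the policy and BGP group names are hypothetical, and the mechanism that actually installs the host routes is the VMTO machinery described in the slides, not this policy by itself):

set policy-options policy-statement EXPORT-HOST-ROUTES term hosts from route-filter 0.0.0.0/0 prefix-length-range /32-/32
set policy-options policy-statement EXPORT-HOST-ROUTES term hosts then accept
set protocols bgp group wan-edge export EXPORT-HOST-ROUTES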

50 BUM Traffic

ORE (Overlay Replication Engine)
- Broadcast: Layer 2 packets that must be flooded to all devices in a broadcast domain
- Unknown unicast: Layer 2 packets whose destination has not yet been learned by the switch and that must therefore be flooded to all devices within the broadcast domain
- Multicast: Layer 2 packets that must be flooded to more than one device within the broadcast domain

51 BUM Replication Without ORE

ORE (Overlay Replication Engine)
1. A server needs to send a BUM packet (e.g., ARP, DHCP) on VLAN 10.
2. A unicast packet is sent to an x86 virtual machine dedicated to BUM replication.
3. The x86 virtual machine converts the packet into a standard multicast or broadcast packet and forwards it to all intended receivers.
Suboptimal method: this becomes an exponential burden that does not scale, is subject to performance degradation, and is an unreliable way to perform broadcast and multicast replication.

52 BUM Replication with ORE

ORE (Overlay Replication Engine)
1. A server needs to send a BUM packet (e.g., ARP, DHCP) on VLAN 10.
2. A unicast packet is sent to the ORE on the MX Series.
3. The MX Series converts the packet into a standard multicast or broadcast packet and forwards it to all intended receivers.
Optimal method: the optimal place to perform this replication is in purpose-built hardware. Juniper's programmable silicon enables this functionality and provides much greater scale and performance.

53 Single Pane of Glass for Wired and Wireless Networks
JUNOS SPACE NETWORK DIRECTOR: wired and wireless visualization. Network Director lets you visualize, analyze, and control the network from a single pane of glass, with a complete wired and wireless view, flow monitoring, and real-time performance monitoring.

54 Network director Smart network management from a single pane of glass
Junos Space Network Director enables smart, comprehensive, and automated network management through a single pane of glass. Network Director lets network administrators visualize, analyze, and control their entire data center, physical and virtual, across single and multiple sites. Three key elements:
- Visualize: complete visualization of the virtual and physical network, along with graphical virtual machine tracing.
- Analyze: Performance Analyzer provides real-time and trended monitoring of VMs, users, and ports; VM Analyzer provides a real-time physical and virtual topology view with vMotion activity tracking; Fabric Analyzer monitors and analyzes the health of any Juniper fabric system.
- Control: zero-touch provisioning simplifies network deployment without user intervention, reducing configuration errors caused by human error; bulk provisioning accelerates application delivery while protecting against configuration errors with profile-based, pre-validated configuration; orchestration via Network Director APIs (open RESTful APIs that provide complete service abstraction of the network, not just device or network abstraction, and integrate with third-party orchestration tools such as OpenStack and CloudStack) accelerates service delivery; VMware vCenter integration provides physical/virtual network orchestration based on vMotion activity.
Slide graphic: Network Director sits between the physical and virtual networks, exposing an API and the visualize, analyze, and control functions.

55 Contrail SDN Controller: Overlay Architecture

Speaker notes: In the proactive overlay architecture, everything is orchestrated by an orchestration system. Every time any resource (compute, storage, or now network) needs to be provisioned, that is done through the orchestration system. The orchestration system is a key enabler for building private and hybrid clouds; it understands and manages what is needed to place workloads, and it is responsible for creating them and moving them as workload needs change dynamically. The orchestration system has only a very abstract idea of the network; it does not understand all of the network paths needed to stitch together these complicated cloud applications. That is the job of the controller: it must translate the high-level messages from the orchestration system about where a VM needs to be placed into a lower-level understanding of all the network paths necessary to support the applications. Hence there are several important things the controller needs to do:
- Configuration: translate the high-level messages from the orchestration system into lower-level messages that program flows into the virtual switches or routers.
- Analytics: gather results on what is happening in the software overlay.
- Control: the controller talks to the vSwitches/vRouters running on the compute nodes, and to physical switches through a gateway function, to support physical infrastructure that is not virtualized. It can federate and talk to other controllers across clouds.
Slide graphic: the orchestrator drives the JunosV Contrail controller (horizontally scalable, highly available, federated) over REST; the controller's configuration, analytics, and control functions use BGP for federation and clustering, BGP plus Netconf toward gateway routers, and XMPP toward virtualized servers running the KVM hypervisor with the JunosV Contrail vRouter/agent (L2 and L3). Tenant VMs are carried over MPLS over GRE or VXLAN across an IP fabric underlay of Juniper QFabric/QFX/EX or third-party switches, with Juniper MX or third-party gateway routers at the edge.

56 Contrail + MX = Better Together

How Contrail creates synergies with the MX. A gateway router is required in any virtualization or cloud deployment; the gateway is essentially the bridge between the physical and virtual worlds and is responsible for routing traffic entering and leaving the cloud (on the physical network) into the virtual world. Many solutions accomplish this with a separate piece of software, but because Contrail leverages common routing protocols like BGP and MPLS over GRE, it can talk directly to physical routers like the MX Series, eliminating the need for a separate gateway translation device. This is a much more scalable and clean approach to solving the problem, and it has been a key differentiator for Juniper, especially in accounts that are already MX customers.
Slide callouts:
- Contrail speaks common protocols that an MX understands, making integration simple and allowing the controller to speak to physical elements (a hedged sketch follows).
- Future development will increase integration.
- Using the MX as the gateway reduces the need for a software gateway and additional ports/servers.
Slide graphic: as on the previous slide, the orchestrator drives the SDN controller over REST; the controller uses BGP and Netconf toward the MX gateway and XMPP toward the vRouters on virtualized servers, over an IP fabric underlay with MPLS-over-GRE or VXLAN tunnels.
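A hedged sketch of the MX side of that integration, along the lines of a Contrail gateway setup (the addresses, group name, and tunnel destination range are placeholders):

set routing-options dynamic-tunnels contrail-tunnels source-address 192.0.2.1
set routing-options dynamic-tunnels contrail-tunnels gre
set routing-options dynamic-tunnels contrail-tunnels destination-networks 10.10.0.0/16
set protocols bgp group contrail-controller type internal
set protocols bgp group contrail-controller local-address 192.0.2.1
set protocols bgp group contrail-controller family inet-vpn unicast
set protocols bgp group contrail-controller neighbor 10.10.10.5

The controller advertises per-VM routes over this iBGP session and the MX forwards into the overlay using MPLS over GRE (or VXLAN), matching the "BGP & Netconf" path shown on the slide.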

57 MetaFabric ARCHITECTURE: what will it enable?
Wrapping up: it has been five years getting to this point, and today we have taken the next leap forward with the MetaFabric architecture. This is just the start. We are going to continue investing in our technology, our partnerships, and our products to help our customers solve their business problems: simplifying their SDN transition, delivering greater analytics and insight across their clouds and data centers, and building smarter networks for the best possible time to value. Simple. Open. Smart. Accelerated time to value and increased value over time.

58 Thank you

