1
Juniper SP Products Update
Ivan Lysogor 4th September 2015
2
Successful business requires Velocity - Agility - Continuity
Market & challenges: margin pressure, long lead times, operational complexity.
3
FULL PORTFOLIO COVERAGE
[Network diagram: full portfolio coverage end to end, with vMX as the CPE, ACX in access and pre-aggregation, MX in aggregation and as EPC/SDN gateway, and PTX in the metro, core, and backbone, serving home, branch, HQ, and mobile sites under the NorthStar and Contrail controllers. Evaluation criteria shown: solution requirements, solution cost, port cost, maintenance cost, upgrade cost, reliability, and new service rollout speed.]
4
MX Product Update
5
MX Portfolio Overview: One TRIO Architecture, One Universal Edge
MX2020: 9600 Gbps
MX2010: 4800 Gbps
MX960: 2860 Gbps
MX480: 1560 Gbps
MX240: 520 Gbps
MX104: 80 Gbps
vMX: N x 10 Gbps
* Current full-duplex capacity is shown.
6
MPC5E: 24 x 10GE or 6 x 40GE
Description: 240 Gbps line card with flexible 10GE/40GE interface configuration options, increased scale, and OTN support.
Interface features:
- Interface combinations: 24 x 10GE (MIC0 and MIC1); 6 x 40GE (MIC2 and MIC3); 12 x 10GE and 3 x 40GE (MIC0 and MIC3); 12 x 10GE and 3 x 40GE (MIC1 and MIC2)
- Port queues, with an optional 32K-queue upgrade license and a 1M-queue option
Applications and scale: up to 10M IP routes (in hardware), full-scale L3VPN and VPLS, increased inline IPFIX scale.
7
MPC5E: 2 x 100GE and 4 x 10GE
Description: 240 Gbps line card providing 100GE and 10GE connectivity, increased scale, and OTN support.
Interface features:
- Interfaces: 4 x 10GE SFP+; 2 x 100GE CFP2
- Port queues, with an optional 32K-queue upgrade license and a 1M-queue option
Applications and scale: up to 10M IP routes (in hardware), full-scale L3VPN and VPLS, increased inline IPFIX scale.
8
MPC6E Overview
Description: 480 (520) Gbps modular line card for the MX2K platform, with increased scale and performance. The 4x10GE MIC and the 48x1GE RJ-45 MIC are technically not supported.
Interface features:
- MICs supported: 2x100GE CFP2 with OTN (OTU4); 4x100GE CXP; 24x10GE SFP+; 24x10GE SFP+ with OTN (OTU2)
- Port-based queueing; limited-scale per-VLAN queueing
Applications and scale: up to 10M IP routes, full-scale L3VPN and VPLS, increased inline IPFIX scale.
9
Routing system upgrade scenario
Highlights:
- Protects investment in hardware (SFBs)
- Reduces software qualification effort
- No JUNOS upgrade required; driver installation is done in service
Scenario: the router runs today with MPC6 installed at 480 Gbps per slot. With Continuity support, the driver package is installed in service and new higher-density MPCs are added, bringing the 8 x SFBs to 800 Gbps per slot; the operator may upgrade to a new fabric or keep the existing one. The future router leaves network engineers happily working on something else (probably SDN-related).
10
MX Data Center Gateway EVPN VXLAN
[Diagram: Internet, DC gateway, and DC fabric with EVPN and VXLAN.]
Release history:
- 13.2R1: first implementation (MPLS encapsulation)
- 14.1R2: VM mobility support
- 14.1R4: active/active support
- 14.1R4: VXLAN encapsulation
- 14.1R4: VMware NSX integration
11
Virtual Machine Mobility
EVPN advantages:
- Link efficiency: all-active forwarding with built-in L2 loop prevention
- Convergence: leading high-availability, convergence, and fast-reroute capabilities
- L2 and L3 tie-in: built into the protocol
- Optimal routing: ingress and egress VM mobility optimizations
[Diagram: virtual machine mobility between DC fabrics, across DC gateways, over MPLS/IP.]
12
PTX Update
13
PTX Series Routers. PTX1000: Distributed Converged Supercore
Power & performance: 2.88 Tbps* distributed core router; flexible 288 x 10GbE, 72 x 40GbE, or 24 x 100GbE; combines full IP with Express MPLS; powered by ExpressPlus.
Deployability: the industry's only fixed-configuration core router; only 2 RU and 19-inch rack mountable.
OS + SDN: JUNOS, 15 years of routing innovation; SDN, 25 years perfecting IP/MPLS-TE traffic optimization algorithms.
* Counting bits in and bits out.
14
PTX Series Routers: Product Family Technical Specifications
|                      | PTX1K                        | PTX3K                | PTX5K                  |
| Slot capacity at FRS | Fixed, 2.88 Tbps per system  | 1 Tbps/slot          | 3 Tbps/slot            |
| System capacity      | 2.88 Tbps (288 x 10GE)       | 8 Tbps (80 x 100GE)  | 24 Tbps (240 x 100GE)  |
| Power (typical)      | ~1.35 kW                     | ~6 kW                | 13 kW*                 |
| Power (maximum)      | 1.65 kW                      | ~7.2 kW              | 18 kW*                 |
| Height               | 2 RU                         | 22 RU                | 36 RU                  |
| Depth                | 31"                          | 270 mm               | 33"                    |
| No. of FPCs/PICs     | N/A                          | 8/8                  | 8/16                   |
| FPC types supported  | N/A (fixed)                  | SFF-FPC              | FPC1/FPC2/FPC3         |
| 100GE density        | 24                           | 80                   | 240                    |
| 10GE density         | 288                          | 768                  | 1536                   |
Timing: 2HCY15
15
vMX introduction
16
Each option has its own strengths and is designed with a different focus
Physical vs. Virtual
| Physical                                              | Virtual                                                                                  |
| High throughput, high density                         | Flexibility to reach higher scale in the control plane and service plane                |
| Guaranteed SLA                                        | Agile, quick to start                                                                    |
| Low power consumption per unit of throughput          | Low power consumption per control-plane and service instance                            |
| Scale up                                              | Scale out                                                                                |
| Higher entry cost and longer time to deploy           | Lower entry cost and shorter time to deploy                                              |
| Distributed or centralized model                      | Optimal in centralized, cloud-centric deployments                                        |
| Well-developed network management systems, OSS/BSS    | Same platform management as physical, plus the same VM management as any cloud software |
| Variety of network interfaces for flexibility         | Cloud-centric, Ethernet-only                                                             |
| Excellent price-per-throughput ratio                  | Ability to apply a "pay as you grow" model                                               |
17
vMX Overview: efficient separation of control plane (VCP) and data plane (VFP)
- Data packets are switched within vTRIO
- The multi-threaded SMP implementation allows core elasticity
- Only control packets are forwarded to JUNOS
- Feature parity with JUNOS (CLI, interface model, service configuration)
- NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)
[Diagram: the VCP guest OS (JUNOS) runs RPD, DCD, CHASSISD, and SNMP; the VFP guest OS (Linux) runs vTRIO and the LC kernel; both sit on a hypervisor over x86 hardware and communicate over TCP.]
Source: IT document author N. Caryl
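Feature parity with the physical MX also means the vMX can be driven with the same automation tooling. Below is a minimal sketch, assuming the Junos PyEZ library (junos-eznc) is installed and NETCONF is enabled on the VCP; the management address and credentials are placeholders, not values from the deck.

```python
# Minimal PyEZ sketch: the VCP runs full JUNOS, so standard NETCONF RPCs behave
# as they do on a physical MX. Host, user, and password below are placeholders.
from jnpr.junos import Device

dev = Device(host="198.51.100.10", user="lab", passwd="lab123")
dev.open()

# Basic facts gathered from the virtual routing engine.
print(dev.facts["model"], dev.facts["version"])

# Terse interface listing: the VFP's vNICs appear as ge-0/0/0, ge-0/0/1, ...
reply = dev.rpc.get_interface_information(terse=True)
for name in reply.findall(".//physical-interface/name"):
    print(name.text.strip())

dev.close()
```

The same script would run unchanged against a physical MX, which is the practical meaning of the shared JUNOS control plane.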
18
vMX Use Cases
19
General considerations for vMX deployment
- vMX behaves the same way as the physical MX, so it can serve as a virtual CPE, virtual PE, virtual BNG, and so on (see note 1)
- A great option as lab equipment for general qualification or function validation
- In cloud or NFV deployments, where virtualization is the preferred technology
- Fast service enablement without the overhead of installing new hardware
- A solution for scaling services when control-plane scale is the bottleneck
- When network functions are centralized in a DC or cloud
- When service separation is preferred, by deploying different routing platforms
Note 1: See the product feature and performance details in later slides, and the other comments on where vMX is the more suitable choice.
20
Agility example: Bring up a new service in a POP
The problem: new service introduction takes a long time. The service must be qualified on the existing infrastructure, may need a new Junos release, may destabilize existing services and delay introduction, and the popularity of a new service is unpredictable.
The approach: use vMX to validate new services.
1. Install a new vMX to start offering the new service without impact to the existing platform. Only the new service is validated, current services are not disturbed, and the initial cost is low because the x86 resources used for the trial can be reused. If the service fails, kill it without impact to the services already running.
2. Scale out the service with vMX quickly if the traffic profile fits the requirements.
3. Add the service directly to the physical MX gateway, or add more physical MX routers, if the service is successful and demand grows with significant traffic.
4. Integrate the new service into the existing PE when the service is mature.
[Diagram: a PoP where a vMX PE is added alongside the physical MX, both attached to the SP VPN network and serving L3 CPEs.]
21
Proof of concept lab validation or SW certification
CAPEX or OPEX reduction for lab validation or network POC
22
vCPE solution with vMX as the SDN Gateway
- vMX as the SDN gateway router, providing support for BGP and overlay tunneling protocols
- vMX also addresses the VRF scaling issue for L3 service chaining
- vCPE service: virtualized services for VPN customers before Internet access or between VPN sites (NAT, firewall, DPI, caching)
[Diagram: service chains X1 and Y1 hosted in data centers behind vMX SDN gateways and a DC gateway; Starbucks VPN VRFs (LA, NY, core, Hawaii) on PEs across the SP VPN network, with L3 CPEs in Los Angeles, New York, New Jersey, Las Vegas, Chicago, and Honolulu.]
23
Virtual Route Reflector
- The virtual RR runs on VMs on standard servers, with iBGP sessions to clients 1 through n
- vMX can be used as a route reflector and deployed in the same way as a physical RR in the network
- vMX can act either as a vRR or as a typical router with forwarding capability
24
Virtual BNG cluster in a data center
- The BNG function can potentially be virtualized, and vMX can help form a BNG cluster at the DC or CO serving 10K-100K subscribers (roadmap item, not at FRS)
- Suited to heavy BNG control-plane load where little bandwidth is needed
- Pay-as-you-grow model
- Rapid deployment of a new BNG router when needed
- Scale-out works well thanks to the S-MPLS architecture, leveraging inter-domain L2VPN, L3VPN, and VPLS
[Diagram: a cluster of vMX instances acting as vBNGs in a data center or CO.]
25
vMX Product Details
26
vMX Components
VCP (Virtualized Control Plane):
- Virtual JUNOS hosted on a VM
- Follows standard JUNOS release cycles
- Additional software licenses for different control-plane applications, such as Virtual Route Reflector
VFP (Virtualized Forwarding Plane):
- A VM that runs the packet forwarding engine, modeled after the Trio ASIC
- Can be hosted on a VM (offered at FRS) or run as a Linux container (bare metal) in the future
27
vMX system architecture
- Optimized data path from physical NIC to vNIC via SR-IOV (Single Root I/O Virtualization)
- vSwitch for VFP-to-VCP communication (internal host path)
- OpenStack (Icehouse) for VM management (Nova) and provisioning of infrastructure network connections (Neutron)
[Diagram: VCP guest VM (FreeBSD) and VFP guest VM (Linux + DPDK) on a hypervisor, with cores, memory, a vSwitch, SR-IOV, and physical and virtual NICs at the physical layer.]
SR-IOV: direct hardware access from the guest OS to the PCIe card. The hypervisor is used only for interrupts; all data is copied through DMA. Intel's network cards include an L2 switch that routes traffic between VMs on the same host.
Virtio: emulates network hardware. The hypervisor handles interrupts, and all data is copied through, and routed by, the hypervisor.
Result: SR-IOV networking is about 10-15% faster than virtio when traffic goes to an outside network; virtio is roughly equal when only a few VMs on the same compute node exchange traffic (for example, two VMs sending traffic to each other on the same server). For more performance, fine-tune the environment with DPDK, different packet sizes, a limited number of VMs, tuned network drivers, and so on.
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while letting users provision resources through a web interface.
Source: IT document author N. Caryl
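To make the SR-IOV host-side setup concrete, here is a small sketch (an illustration, not a Juniper tool) that reports and enables virtual functions on an SR-IOV-capable NIC through the standard Linux sysfs attributes; the interface name is a placeholder, and changing the VF count requires root.

```python
# Report and enable SR-IOV virtual functions via sysfs on a Linux host.
# The interface name is a placeholder; run as root to change the VF count.
import sys
from pathlib import Path

def sriov_status(iface: str):
    """Return (total_vfs, enabled_vfs) using the standard sysfs attributes."""
    dev = Path("/sys/class/net") / iface / "device"
    total = int((dev / "sriov_totalvfs").read_text())
    enabled = int((dev / "sriov_numvfs").read_text())
    return total, enabled

def enable_vfs(iface: str, count: int) -> None:
    """Enable `count` VFs; the kernel requires resetting to 0 before changing a non-zero value."""
    numvfs = Path("/sys/class/net") / iface / "device" / "sriov_numvfs"
    if int(numvfs.read_text()) != 0:
        numvfs.write_text("0")
    numvfs.write_text(str(count))

if __name__ == "__main__":
    nic = sys.argv[1] if len(sys.argv) > 1 else "eth2"  # placeholder NIC name
    total, enabled = sriov_status(nic)
    print(f"{nic}: {enabled}/{total} VFs enabled")
```

Each enabled VF appears as its own PCI device that can be passed through to the VFP VM, which is what gives the near-line-rate path described above; virtio traffic instead goes through the hypervisor.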
28
Product Offering at FRS
29
FRS Product Offering: Q2 2015 with JUNOS release 14.1R4
- Function: feature parity with MX, except functions related to HA and QoS
- Performance: SR-IOV with PCI pass-through, along with DPDK integration
- Hypervisor support: KVM
- VM implementation: 1:1 VFP-to-VCP mapping
- OpenStack integration (to be finalized)
Juniper deliverable: VFP and VCP. Customer defined: hypervisor/Linux, NIC drivers, DPDK, server, CPU, NIC.
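Since the server, CPU, and NIC are customer-defined, a quick host readiness check can save time before bringing up the VFP/VCP pair. The sketch below is an assumption on my part (not part of the vMX deliverable): it looks for the VT-x CPU flag and populated IOMMU groups as rough indicators that virtualization and VT-d are enabled, and compares core count and RAM against the 10-core / 20 GB minimums from the reference configuration on the next slide.

```python
# Rough KVM/SR-IOV host readiness check. Thresholds follow the reference
# configuration slide (minimum 10 cores, 20 GB RAM); the probing method is
# an illustration, not a Juniper-provided tool.
import os
from pathlib import Path

MIN_CORES = 10
MIN_RAM_GB = 20

def has_vtx() -> bool:
    # 'vmx' is the Intel VT-x CPU flag ('svm' is the AMD equivalent).
    return "vmx" in Path("/proc/cpuinfo").read_text()

def iommu_enabled() -> bool:
    # Populated IOMMU groups indicate the IOMMU (VT-d) is enabled in BIOS and kernel.
    groups = Path("/sys/kernel/iommu_groups")
    return groups.is_dir() and any(groups.iterdir())

def ram_gb() -> float:
    for line in Path("/proc/meminfo").read_text().splitlines():
        if line.startswith("MemTotal"):
            return int(line.split()[1]) / (1024 * 1024)  # kB -> GB
    return 0.0

if __name__ == "__main__":
    print(f"cores: {os.cpu_count()} (need >= {MIN_CORES})")
    print(f"ram:   {ram_gb():.1f} GB (need >= {MIN_RAM_GB})")
    print(f"VT-x flag: {has_vtx()}  IOMMU groups populated: {iommu_enabled()}")
```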
30
Reference server configuration
Background on VT-d: "VT-d" stands for Intel Virtualization Technology for Directed I/O. VT is the umbrella term for all Intel virtualization technologies, and VT-d is one particular solution within that suite. The concept behind VT-d is hardware support for isolating and restricting device accesses to the owner of the partition managing the device. A VMM may support various models for I/O virtualization, including emulating the device API, assigning physical I/O devices to VMs, or permitting I/O device sharing in various manners; the key problem is isolating device access so that one resource cannot reach a device managed by another. VT-d currently includes four key capabilities:
1. I/O device assignment: allows an administrator to assign I/O devices to VMs in any desired configuration.
2. DMA remapping: supports address translation for device DMA data transfers.
3. Interrupt remapping: provides VM routing and isolation of device interrupts.
4. Reliability features: report and record system-software DMA and interrupt errors that might otherwise corrupt memory or impact VM isolation.
Note that VT-d does not depend on VT-x: a VT-x-enabled system can operate without VT-d, or with VT-d disabled or unconfigured; you simply lose the benefits of the feature.
Reference configuration:
- CPU: Intel Xeon 3.1 GHz
- Cores: minimum 10
- RAM: 20 GB
- OS: Ubuntu LTS (with libvirt 1.2.2; upgrade to 1.2.6 for better performance); kernel: Linux generic; libvirt: 1.2.6
- NICs: Intel 82599EB (for 10G)
- QEMU-KVM: version 2.0
Note: the initial release requires a minimum of 10 cores: 1 for the RE VM, 7 for the PFE VM (4 packet-processing cores, 2 I/O cores, and 1 for host0if processing), and 2 for the RE and PFE emulations (QEMU/KVM). A later version will have a smaller footprint, requiring fewer cores or less RAM.
31
Performance test setup
Setup:
- A single vMX instance with 6 x 10G ports sending bidirectional traffic
- 16 cores in total (among them, 6 for I/O and 6 for packet processing)
- 20 GB RAM in total, 16 GB for the vFP process
- Basic routing enabled, no filters configured
Performance:
- 60 Gbps of bidirectional traffic per vMX at 1500-byte packets, with no packet loss
- Complete RFC 2544 results to follow
[Diagram: tester connected to the vMX; an 8.9G figure is labeled in the test topology.]
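As a sanity check on the quoted number, the short calculation below converts 60 Gbps at 1500-byte packets into an approximate packet rate; treating 1500 bytes as the on-wire frame size and adding 20 bytes of preamble and inter-frame gap is a standard Ethernet assumption, not something stated on the slide.

```python
# Back-of-the-envelope packet rate for the quoted 60 Gbps at 1500-byte packets.
FRAME_BYTES = 1500      # frame size used in the test (from the slide)
WIRE_OVERHEAD = 8 + 12  # preamble + inter-frame gap, in bytes (assumption)
THROUGHPUT_BPS = 60e9   # aggregate bidirectional throughput (from the slide)

pps = THROUGHPUT_BPS / ((FRAME_BYTES + WIRE_OVERHEAD) * 8)
print(f"~{pps / 1e6:.1f} Mpps aggregate")  # roughly 4.9 Mpps across the 6 ports
```

With smaller frames the packet rate, not the bit rate, becomes the limiting factor, which is why the full RFC 2544 sweep across frame sizes matters.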
32
Thank you