QCT CORD Ready POD
Dustin Wu, QCT Telco Solution
Same Place, Different Approach
2017: one person holds 13 computers in one hand. 1957: 13 people deliver one computer. 60 years of change have given people far more capability.

NFV transitioned from theory through 2015 into reality: AT&T has already virtualized 5-10% of its network functions, and is on the way to 30% by year-end and 75% by 2020. Traditionally, telcos can take a year or two to decide they need a new feature; the equipment vendors then design it into their appliances over 18 months; the telco tests it for a year, and only then begins deploying it through the base. An enterprise hoping to increase or decrease the speed of its broadband connectivity might have to wait months. AT&T's John Donovan, the company's top technical strategy officer, has said that the only way for AT&T to stay in front of data traffic growth is to do it with virtualized software running in centralized data centers. Part of this plan is reducing spending on hardware built for a single purpose such as billing, routing, security, and policy. These functions are all required, but they can in some cases be done more cheaply using software running on inexpensive servers in a data center. SOURCE:
World Data Traffic Forecast
Access, Edge, Aggregation, Headquarters
Network hierarchy: Radio Access; Access Edge; Aggregation (Metropolitan Network); Headquarters (Wide Area Network).
Evolution of Outdated Edge
Diagram: Users connect to Central Offices, which reach the Telco & IX Cloud and, beyond it, the Commodity Clouds.
Evolution of Outdated Edge
Diagram: Users connect to an Edge Cloud, which reaches the Telco & IX Cloud and, beyond it, the Commodity Clouds.
Great Opportunity for Operators
Diagram: Users connect to an Edge Cloud, then the Telco & IX Cloud, then the Commodity Clouds. Subscriber experience is dictated from here, at the edge. Human reaction time: ms. Latency to a centralized cloud: up to 400 ms. Emerging applications require edge processing: AR visual overlays, autonomous vehicle coordination, and IoT battery life (50-75% improvement with edge processing). Edge processing is vital.
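The latency argument above can be made concrete with a small sketch. All numbers here are illustrative assumptions (the slide elides its exact reaction-time figure and quotes only an upper bound of ~400 ms to a centralized cloud):

```python
# Illustrative latency-budget check for edge vs. centralized clouds.
# Every constant below is an assumption for illustration only.
HUMAN_REACTION_MS = 100      # assumed human-perception budget
CENTRALIZED_RTT_MS = 400     # quoted upper bound for a distant centralized cloud
EDGE_RTT_MS = 10             # assumed round trip to a nearby edge cloud

def fits_reaction_budget(rtt_ms: float, budget_ms: float = HUMAN_REACTION_MS) -> bool:
    """True if the network round trip leaves headroom inside the reaction budget."""
    return rtt_ms < budget_ms

print(fits_reaction_budget(CENTRALIZED_RTT_MS))  # False: too slow for AR overlays
print(fits_reaction_budget(EDGE_RTT_MS))         # True: edge keeps interaction fluid
```

A centralized round trip alone can exceed the whole interaction budget, which is the case for moving processing to the edge.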
Access, Edge, Aggregation, Headquarters
Network hierarchy: Radio Access; Access Edge; Aggregation (Metropolitan Network); Headquarters (Wide Area Network).
CORD High-Level Architecture
Transform the legacy central office into shared cloud infrastructure built on SDN, NFV, and Cloud under the CORD XOS controller. Access: PON OLTs (residential), Metro Ethernet (enterprise), and BBUs (mobile), with ROADM toward the core. Services: vOLT, vSG, vRouter, vCDN (residential); vRAN & vEPC (mobile); SD-WAN & VPN (enterprise).
CORD: Central Office Re-architected as a Data Center
Today's central office has evolved over 40-50 years; operators run a large number of COs with 300+ types of equipment, a huge source of CAPEX/OPEX. CORD rebuilds the CO as a data center of commoditized elements: a white-box leaf-spine SDN fabric, the ONOS SDN controller, and XOS for VNF/service management, combining Cloud, SDN, and NFV from access to core.
CORD: Enable the Edge Cloud
The economy of a data center: infrastructure built from a few commodity building blocks using open-source software and white boxes. The agility of a cloud service provider: software platforms that enable rapid creation of new services. All in an easy-to-consume platform.
QCT CORD Ready Server (Purley platform)
Residential-CORD: vOLT, vSG, vRouter, vCDN. Mobile-CORD: vBBU, vMME, vSGW, vPGW, vCDN. Enterprise-CORD: vCarrierEthernet, vOAM, vWanEx, vIDS.
CORD Platform
QCT CORD Ready server platforms: D52B-1U, D52BQ-2U, T42S-2U, T42D-2U

D52B-1U: top-shelf Intel Xeon Scalable processor*; up to 5x PCIe expansion slots; up to 7.68 TB memory capacity**; up to 12x hot-swap drive bays.
T42S-2U: Intel Xeon Processor Scalable Family; up to 3x PCIe expansion slots per node; up to 25.6 TB memory capacity***; up to 24x hot-swap drive bays.
D52BQ-2U: top-shelf Intel Xeon Scalable processor*; up to 10x PCIe expansion slots; up to 30 TB memory capacity in a 2U system****; up to 26x hot-swap drive bays.
T42D-2U: top-shelf Intel Xeon Scalable processor*; up to 3x PCIe expansion slots per node (2x low-profile MD-2 + 1x OCP mezzanine); up to 7.68 TB memory capacity**; 16x hot-swap U.2 drives in total (SSD only, without HDD support).

* With limited conditions. ** With 12x 128 GB DIMM + 12x 512 GB AEP. *** AEP with Cascade Lake CPUs: 16x 64 GB RDIMM + 48x 512 GB AEP. **** 48x 128 GB DDR4 RDIMM + 48x 512 GB AEP.
QCT CORD Ready Servers: Best Fit per CORD Flavor
QuantaGrid D52B-1U, QuantaGrid D52BQ-2U, QuantaPlex T42S-2U (4-node), QuantaPlex T42D-2U (4-node). M-CORD: vEPC. M/R-CORD: vCDN. R-CORD: vOLT, vSG, vRouter. E-CORD: vIDS. Intel® Xeon® Scalable Processors. Intel Inside®. New Possibilities Outside. Request a demo with QCT representatives.
RSD 2.1/Purley - Pool & Compose
Pooled resources (CPU pool, storage pool, NVMe pool, GPU/FPGA pool, network pool) are composed into systems on demand.
Dynamic Deployment
CORD comes in many sizes, suitable for different deployment scenarios: Micro: 1K-3K residences; Mini: 3K-10K residences; S: 5K-15K residences; M-L-XL: up to 300K residences. SOURCE: Ciena
QCT BMS As A CORD Fabric Switch
Enable OF-DPA under the ONL environment. Run the Fabric-OFTest hardware-conformance test cases on QCT switches and solve the issues encountered during testing. Install the switches into the CORD POD in the QCT lab: adapt the switch bring-up environment to align with CORD's auto-deployment process, and fine-tune the deployment scripts (e.g. vendor-specific OUI settings). Deploy the QCT switches within the CORD POD environment and test traffic between different hosts/VMs across the spine switches.
Interoperability Test at ONF Lab
2x2 topology: Spine-1 and Spine-2 above Leaf-1 and Leaf-2, with Server-1 attached to Leaf-1 and Server-2 to Leaf-2. Leaf-1 and Spine-1 are the QCT UUT (units under test); Leaf-2 and Spine-2 are 3rd-party RU. Test cases: Bridging: two hosts on the same leaf within the same subnet. Routing (including MPLS): two hosts on different leaves in different subnets. VLAN: single and double VLAN-tagged traffic.
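The bridging vs. routing split follows from the topology itself: hosts on different leaves can only reach each other through a spine. A small sketch makes this checkable (node names mirror the slide; the full leaf-spine mesh with one server per leaf is an assumption):

```python
from collections import deque

# Minimal model of the 2x2 interop topology: two spines, two leaves,
# one server per leaf, full mesh between leaves and spines.
adjacency = {
    "server-1": {"leaf-1"},
    "server-2": {"leaf-2"},
    "leaf-1":   {"server-1", "spine-1", "spine-2"},
    "leaf-2":   {"server-2", "spine-1", "spine-2"},
    "spine-1":  {"leaf-1", "leaf-2"},
    "spine-2":  {"leaf-1", "leaf-2"},
}

def shortest_path(src, dst):
    """Breadth-first search for a shortest hop path between two nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adjacency[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Routing case: hosts on different leaves must traverse a spine.
path = shortest_path("server-1", "server-2")
print(len(path))                                        # 5 nodes end to end
print(any(node.startswith("spine") for node in path))   # True
```

Which spine carries the flow is not fixed; in the real fabric ECMP spreads traffic across both, which is exactly what the interop tests exercise.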
Deployment Validation in CORD CI/CD Environment
Deployment via Jenkins within the testbed provided by QCT
CORD Deployment on QCT CORD POD
Steps to deploy:
1. Download the CORD repo on the dev machine.
2. Create the CORD dev VM on the dev machine.
3. Fetch CORD packages on the dev machine.
4. Push the software to the head node.
5. Deploy and configure the head node.
6. Reboot (to deploy) the compute nodes and the switches.
7. Add your configurations.
POD layout: the operator/dev machine drives a head node (which runs OpenStack, ONOS, XOS, MAAS, ...), compute nodes 1-2 behind Leaf-1/Leaf-2 and Spine-1/Spine-2, and an uplink to the Internet.
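The steps above can be sketched roughly as the following shell transcript. This is a hedged sketch, not the definitive procedure: exact branch names, make targets, and the POD-config filename vary by CORD release, and the `PODCONFIG` value below is hypothetical.

```shell
# On the dev machine: fetch the CORD source tree.
mkdir cord && cd cord
repo init -u https://gerrit.opencord.org/manifest -b cord-4.0   # branch is an assumption
repo sync

# Select a POD profile, then build and deploy. In 4.x-era CORD the build
# system fetches packages, pushes software to the head node, and deploys it.
cd build
make PODCONFIG=my-physical-pod.yml config   # filename is hypothetical
make build

# Finally, reboot the compute nodes and switches so MAAS/ONOS provision
# them automatically, then apply site-specific configuration.
```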
Leaf/Spine Switch Software Stack
QCT T3048-LY8 switches for CORD. Software stack (leaf and spine): OpenFlow 1.3 northbound to the controller; Indigo OF agent; OF-DPA (OpenFlow Data Path Abstraction); Broadcom Trident2 ASIC; OCP software (ONL: Open Network Linux; ONIE: Open Network Install Environment) on QCT bare-metal hardware. Spine switch: 6x 40G ports downlink to leaf switches; GE management port. Leaf switch: 6x 40G ports uplink to different spine switches, with ECMP across all uplink ports; 48x 10G ports downlink to servers; GE management port.
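In CORD this fabric is programmed by the ONOS segment-routing application, which needs a per-device entry in the network configuration. A minimal netcfg fragment for one leaf might look like the following (field names follow the ONOS segmentrouting app; the device ID, SID, addresses, and MAC are illustrative assumptions):

```json
{
  "devices": {
    "of:0000000000000001": {
      "segmentrouting": {
        "name": "leaf-1",
        "ipv4NodeSid": 101,
        "ipv4Loopback": "192.168.0.101",
        "routerMac": "00:00:00:00:01:01",
        "isEdgeRouter": true,
        "adjacencySids": []
      }
    }
  }
}
```

Leaves are marked `isEdgeRouter: true` while spines are not, which is how the app distinguishes the two roles when it installs ECMP groups across the uplinks.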
QCT T3048-LY8 Switches as the CORD Fabric
CORD-Fabric: open-source SDN-based CLOS networking. XOS (the orchestrator) sits above an ONOS controller cluster running vOLT control, multicast control, vRouter control, underlay control, and overlay control. The underlay is a leaf-spine fabric of T3048-LY8 switches connected to the metro router; the overlay is OVS on the compute nodes, hosting vSG instances and VNF chains for residential, enterprise, and mobile (R/E/M) access.
Leaf/Spine Switch Software Stack
QCT T7032-IX1 switches for CORD. Software stack (leaf and spine): OpenFlow 1.3 northbound to the controller; Indigo OF agent; OF-DPA (OpenFlow Data Path Abstraction); Broadcom Tomahawk ASIC; OCP software (ONL: Open Network Linux; ONIE: Open Network Install Environment) on QCT bare-metal hardware. Spine switch: 32x 40G/100G ports downlink to leaf switches; GE management port. Leaf switch: 16x 40G/100G ports uplink to different spine switches, with ECMP across all uplink ports; 64x 10G/25G or 16x 40G/100G ports downlink to servers; GE management port.
QCT T7032-IX1 Switches as the CORD Fabric
CORD-Fabric: open-source SDN-based CLOS networking. XOS (the orchestrator) sits above an ONOS controller cluster running vOLT control, multicast control, vRouter control, underlay control, and overlay control. The underlay is a leaf-spine fabric of T7032-IX1 switches connected to the metro router; the overlay is OVS on the compute nodes, hosting vSG instances and VNF chains for residential, enterprise, and mobile (R/E/M) access.
Leaf/Spine Switch Software Stack
QCT T7032-IX1 & T3048-LY8 for CORD. Software stack (leaf and spine): OpenFlow 1.3 northbound to the controller; Indigo OF agent; OF-DPA (OpenFlow Data Path Abstraction); Broadcom Tomahawk ASIC in the spine; OCP software (ONL: Open Network Linux; ONIE: Open Network Install Environment) on QCT bare-metal hardware. Spine switch (T7032-IX1): 32x 40G ports downlink to leaf switches; GE management port. Leaf switch (T3048-LY8): 6x 40G ports uplink to different spine switches, with ECMP across all uplink ports; 48x 10G ports downlink to servers; GE management port.
QCT T7032-IX1 & T3048-LY8 as the CORD Fabric
CORD-Fabric: open-source SDN-based CLOS networking. XOS (the orchestrator) sits above an ONOS controller cluster running vOLT control, multicast control, vRouter control, underlay control, and overlay control. The underlay pairs a T7032-IX1 spine layer with T3048-LY8 leaves connected to the metro router; the overlay is OVS on the compute nodes, hosting vSG instances and VNF chains for residential, enterprise, and mobile (R/E/M) access.
QCT CORD POD - Basic SKU (QCT CORD Ready components)
Server configuration: 1 head node + 3 compute nodes.
System: D52B-1U, 2.5" tiered; QTY 4 (1 head + 3 compute).
CPU: Intel Xeon Gold 6140 (18-core, 2.3 GHz), 2 per system.
RAM: 32 GB 2666 MHz DDR4, 8 per system.
HDD/SSD: Intel SSD, 480 GB SATA 6 Gb/s, 2.5" drive.
NIC 1: Intel PCIe 10G 4-port X710-DA4.
NIC 2: Intel PCIe 1G 4-port i350-T4.
PSU: 800 W power supply.
Fabric switch: BMS T3048-LY8 (B-2-F, AC, Rangeley).
Management switch: T1048-LB9 (B-2-F, AC).
Fabric link cables: 40G DAC 1 m; 10G DAC 1.5 m.
Management link cables: RJ45 Cat 5e, QTY 12.
QCT CORD Ready POD – Interconnect
Components: head node, compute nodes 1-4, fabric leaf switches 1-2, fabric spine switches 1-2, management TOR, and an Internet uplink.
Links: leaf to nodes: 10G; HA leaf-spine connection: 10G (under planning); leaf-spine fabric: 10/40G; uplink to Internet: 1G; management network: 1G.
QCT CORD Ready POD 9U
Summary: QCT Value Proposition
Pre-configured (BKC), pre-validated (full rack), pre-integrated (HW+SW), and pre-optimized (workload).
What's Next: QCT CORD Ready POD Roadmap
Complete development of the Fabric Test Guide test cases. Proactively participate in and contribute to the brigades: Certification Brigade, CORD BOM Brigade, Performance Brigade.
QCT. We Make Cloud Magic Possible.