MCORD Team


MCORD Team (Company / Name / Email / Phone)
NETCRACKER:
  Neeraj Bhatt - neeraj.bhatt@netcracker.com - 781-366-5678
  Max Klyus - klyus@netcracker.com - +7 9168575717
  Valentin Plotnichenko - valentin@netcracker.com - 781-325-5657
RADISYS:
  Joseph Sulistyo - sulistyo@radisys.com - 408-940-5639
  Prashant Sharma - prashant.sharma@radisys.com - 858-610-9268
  Prakash Siva - psiva@radisys.com - 971-263-8978
CAVIUM:
  Kin-Yip Liu - kliu@cavium.com - 408-893-5009
  Tejas Bhatt - tejas.bhatt@caviumnetworks.com
AIRHOP:
  Yan Hui - yhui@airhopcomm.com - 858-207-7235
  Hanson On - hon@airhopcomm.com
NEC:
  Yuta Higuchi - y-higuchi@onlab.US - 650-207-1464
SKTelecom:
  Mingeun Yoon - ymiggy@gmail.com - 213-425-6003
AT&T:
  Tom Tofigh - tofigh@ATT.com - 301-675-6262

MCORD POC Meeting Agenda (3 Dec 2015)
Reminder: solution providers
- Both SGW-C and SGW-U will be provided by NetCracker
- Who will emulate the HSS? Emulated by NEC/NetCracker
Action & discussion items
- Logistics: racks - one at ON.Lab, one at Cavium
- Topology representation in ONOS
- Packet tracing (signaling, data)
- Integration
- Clarify H/W requirements
- Network configuration/map (range of IPs and ports for each node) should be given to partners (by Cavium)
- Interface between the VNF manager and XOS
- Method of traffic classification (local vs. non-local); also need to decide whether a 'central EPC' is required

Topology & Flow Path (Modified)
For this POC, only the PGW-C will talk to the ONOS north-bound interface (via an ONOS agent).
[Diagram: leaf-spine OpenFlow fabric connecting UEs, RRU sectors, vBBU eNBs (1) and (2), SGW, PGW-C, PGW-D, MME, SON, edge services (cache), a statistics GUI and the Internet; interfaces shown: X2, S1-U, S1-MME, S5 (GTP-U).]

Traffic Classification
Goal: classify traffic for directing it either to a local (edge) service or to the central service.
Possible options:
- APN (Access Point Name): the MME assigns a PGW based on the APN requested by the UE (application). This is the easiest option from the network's point of view, but it needs some setting or application configuration on the UE, so it suits predefined users or applications. Not flexible in practice, but a simple way to showcase.
- Interception: tap the traffic behind the BBU and direct it to a local VNF if it is intended to be handled at the edge, or send it to the central core otherwise. If the destination IP is used as the decision rule, that IP sits inside the GTP payload, so a function or node that can inspect the GTP packet is needed (a parsing sketch follows below).
- DPI (Deep Packet Inspection)
- LIPA (Local IP Access) & SIPTO (Selected IP Traffic Offload)
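As an illustration of the interception option above, the sketch below pulls the inner IPv4 destination address out of a GTP-U packet and matches it against a set of edge prefixes. It is a minimal sketch: the prefix list, function names and the assumption of plain IPv4 G-PDUs are illustrative, not part of the POC design.

```python
import ipaddress

# Hypothetical prefixes served by the local (edge) slice; illustration only.
LOCAL_PREFIXES = [ipaddress.ip_network("10.10.0.0/16")]

GTPU_PORT = 2152  # well-known UDP port for GTP-U

def inner_dst_ip(gtpu_payload: bytes) -> ipaddress.IPv4Address:
    """Extract the inner IPv4 destination address from a GTP-U payload
    (the UDP payload of a packet sent to port 2152)."""
    flags, msg_type = gtpu_payload[0], gtpu_payload[1]
    if msg_type != 0xFF:          # 0xFF = G-PDU (encapsulated user data)
        raise ValueError("not a G-PDU")
    offset = 8                    # mandatory 8-byte GTP-U header
    if flags & 0x07:              # E, S or PN flag set -> optional fields present
        offset += 4
    inner_ip = gtpu_payload[offset:]
    return ipaddress.IPv4Address(inner_ip[16:20])   # IPv4 destination field

def is_local(gtpu_payload: bytes) -> bool:
    """Classification decision: True -> steer to the local edge slice,
    False -> send toward the central core."""
    dst = inner_dst_ip(gtpu_payload)
    return any(dst in net for net in LOCAL_PREFIXES)
```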

Local EPC vs Central EPC
Scenario options:
- Use only a video cache at the mobile edge (without a local EPC). Once traffic is classified as 'local', it is directed to the local cache. No local EPC is needed, but handover is not supported (some solutions exist that support it); this is acceptable for local usage.
- Use both a local EPC and a cache. A local EPC is needed, handover is supported, and it is a good way to showcase the distributed-core strategy.
Conclusion (suggestion):
- Use the 'APN' method for traffic classification.
- Use emulation (by TeraVM) for the central EPC pair.

Discussion
- Agent on ONOS to interact with the PGW-C
- How to handle GTP tunneling
- Integration effort: integrate without ONOS first, then go with XOS/ONOS
- Rack details
- Renew apps, APIs, parameter list
- XOS: static vs dynamic VM spin-up - need to show on-demand deployment at least for the initial setup
- Project management - sprints, JIRA, Trello

Use Cases (demo scenarios to show)
#1 Local Enterprise Intranet Services
- A good way to show the benefit of utilizing localized mobile network infrastructure
- Use an APN assigned to enterprise employees for internal usage
- Traffic destined for the enterprise is directed to the 'Local Enterprise Slice', while employees can still use normal mobile services through the central mobile network
- Can show local cache, intranet, local DNS, analytics, etc.
- Can show handover scenarios for both internal and inter-cell (micro to macro) cases
#2 Cell on Demand (ANR) Scenario
- Two cells shown on the SON GUI
- Turn on a large amount of UE traffic from TeraVM
- Show congestion on the cells from the GUI
- Spin up a new cell
- Automatic neighbor relationship (by SON)
- Possible MLB (Mobile Load Balancing) execution

VNF-M Interworking (NetCracker)
- NetCracker's VNF Manager, interfacing with XOS, will abstract the complexities of the VNFs below it.
- Can Cavium's vBBU and Radisys' PGW also be handled in the same way? It would be good if possible.
- Need to find out what is required on the Cavium vBBU and Radisys PGW side.

eSON via the ACORD Framework
- eSON registers on XOS as a service using ACORD
- vBBUs talk with XOS (connected via API)
- eSON is notified by XOS of events from the BBUs
- eSON can access the DB maintained by XOS/ACORD to get information from the vBBUs
- The same logic can be applied to SGWs, PGWs and MMEs

Work to Do

Work to Do (summary)
- EPC integration effort
- TOSCA model for XOS
- eSON plug-in for the XOS/ACORD framework (notification)
- Discovery of the mobile topology to show in the GUI
- VNF Manager interworking (NetCracker MME & SGW; also for vBBU & PGW)
- TeraVM test script

Mobile Edge Project Scope - Working Slides (work done by the last meeting, Nov. 25th 2015)

Contributions and Responsibilities / Key Milestones
Key milestone dates: Dec 15, Feb 1, Feb 15, Mar 1, Mar 13, Mar 20
Milestones (chart rows: ISS, Fabric, RAN, EPC, SON, Services/demo apps; stakeholders: IEEE, AT&T, SK, VZ):
- Rack configured
- Integration testing of ONOS + OpenStack + XOS on real HW for all vendors completed
- M-CORD POD fully placed at ON.Lab
- All vendors' SW/HW integration and M-CORD integration completed
- Multi-vendor data plane inter-operation testing
- ONS 2016
Contributions and responsibilities:
- Infrastructure software stack (ONOS + OpenStack + XOS): ON.Lab
- Fabric: ON.Lab
- RAN (vBBU): Cavium
- EPC (PGW, SGW, MME): Radisys and NetCracker/NEC
- SON: Airhop
- Test equipment: Cobham/Aeroflex
- Services and demo apps: ON.Lab & Aeroflex

Team (please add names/emails)
Project lead: Tejas Bhatt <Tejas.Bhatt@caviumnetworks.com> (email, cell: TBD)
Developers:
- Airhop (eSON): NB APIs / interface to the vBBUs; possible interface to the PGW
- Cavium: Farouk Badawy <Farouk.Badawy@caviumnetworks.com>; Hossam Abdallah <Hossam.Abdallah@caviumnetworks.com>; Kin-Yip Liu <Kin-Yip.Liu@caviumnetworks.com>
- Radisys (3): Joseph Sulistyo <Joseph.Sulistyo@radisys.com>; Prashant Sharma <Prashant.Sharma@radisys.com>
- NetCracker (3): Neeraj Bhatt <neeraj.bhatt@netcracker.com>; Yuta Higuchi <y-higuchi@onlab.us>
- Aeroflex, test & integration support (1): Jim Smith <Jim.Smith@aeroflex.com>; Michael Lingg <Michael.Lingg@aeroflex.com>; Marko Falck <Marko.Falck@aeroflex.com>
- ONOS support team (2-3), SB OF interface: Ali, Marc, Charles
- XOS service chaining: Simon, Scott, Tom Tofigh <Tofigh@att.com>, 301-675-6262
- SK Telecom: Mingeun Yoon <ymiggy@gmail.com>
- System integration & performance optimization
- Define demo and configuration environment

CORD Vision
- Residential: residential software stack (vOLT, vSG, vRouter, vCDN)
- Enterprise: enterprise software stack (VPN, VOD, vCDN, ...)
- 4G Mobility: software stack of vBBU, VOD, vCDN, vDNS
- 5G Mobility: stack over multiple RATs
All run on a common platform: ONOS (virtualization, slicing) + OpenStack (multi-domain) + XOS over a leaf-spine fabric, with front-haul to the RRUs, multi-RAT BBUs, and backhaul to the operator mobile core.

Scope of Project for ONS 2016 (March 15th)
Deliverables: M-CORD (Mobile Edge) concept
- Evaluate delay performance & flow control options for disaggregation of the EPC
- Realize the benefits of the mobile edge & service chaining
- 4G LTE: vBBU, vCDN, vDNS; road-map toward a 5G mobility stack over multiple RATs
Platform: ONOS (virtualization, slicing) + OpenStack (multi-domain) + XOS over the CORD fabric
[Diagram: virtualized BBUs, edge service functions (video caching), and a distributed EPC (vSGW, vPGW) on the CORD platform, connected over the CORD fabric and a backbone switch to centralized EPC control functions (MME, PCRF, vSGW, vPGW); option: UP/CP separation.]

What should we demo?
- Mobile services at the edge (video caching)
- SON configuration for optimization of eNBs
- M-CORD platform for abstraction of mobile components
- Disaggregated EPC model - demonstrate handover through OF flow-table modification

Mobile Edge March Demo
Test scenario:
- The mobility topology (JSON/Python script) will be pushed to ONOS if it is not discoverable; the topology will include what has been configured in the Mobile Edge rack (a topology-push sketch follows below).
- The ONOS GUI will be used to validate the topology as known to ONOS and as configured. This includes the actual hardware (Aeroflex as UE hosts, spine & leaf switches as the connectivity of the mobile RAN) and the control elements (RRUs, vBBUs, links among vBBUs as a representation of the RAN, and RAN connectivity to the mobile EPC controls: MME, SGWs, PGWs, PDN, service elements).
Initialization scenarios:
1. Use Aeroflex to validate that no UE attachments exist.
2. Use the Aeroflex traffic generator to initiate x UE attachments.
3. Use the ONOS GUI to show that connectivity is established between UEs and service termination points.
Mobile edge cache utilization scenarios: TBD
Handoff scenario: TBD
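Assuming the topology is pushed to ONOS through its network configuration REST endpoint, a minimal push script could look like the sketch below. The controller address, credentials, device IDs and host entries are placeholders; the real values come from the Mobile Edge rack configuration.

```python
import json
import requests  # third-party HTTP client, assumed available

ONOS = "http://onos-controller:8181/onos/v1"   # hypothetical controller address
AUTH = ("onos", "rocks")                       # default ONOS credentials

# Illustrative fragment of the mobility topology; keys follow the ONOS
# network-configuration layout, values are placeholders.
topology = {
    "devices": {
        "of:0000000000000001": {"basic": {"name": "leaf-1"}},
        "of:0000000000000011": {"basic": {"name": "spine-1"}},
    },
    "hosts": {
        "00:00:00:00:aa:01/None": {"basic": {"name": "vBBU-eNB-1"}},
        "00:00:00:00:aa:02/None": {"basic": {"name": "PGW-D"}},
    },
}

def push_topology(cfg: dict) -> None:
    """POST the JSON topology/config to the ONOS network configuration API."""
    r = requests.post(f"{ONOS}/network/configuration", auth=AUTH,
                      headers={"Content-Type": "application/json"},
                      data=json.dumps(cfg))
    r.raise_for_status()

if __name__ == "__main__":
    push_topology(topology)
```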

Additional Features and Functionality Needed for Each Building Block (identify development gaps)
- ONOS (ON.Lab)
- XOS (ON.Lab)
- RRU and edge BBUs (Cavium)
- Edge P/SGW data plane (Radisys)
- MME, PGW control (NetCracker)
- Orchestration control / VNF management (XOS/ON.Lab, Radisys)
- eSON application (Airhop)
- TeraVM test VM (Cobham/Aeroflex)
- Smartphones / UEs (Cavium)
- Caching SW (leverage from the CORD project)

PoC: Who does what?
(Update the PoC ownership diagram; add the SGW/RRU/eSON server, etc.)
[Diagram: Mobile CORD ONOS stack (ONOS, XOS, OpenStack, OpenFlow) with the Cavium BBU/BBU-C, NetCracker vMME/vSGW, Radisys PGW-C/PGW-U, and a service element (owner: ?), connected over the 3GPP interfaces S1-MME, S11, S1-U, S5 and SGi.]

Project Definition (add the physical server location for each entity)
Mobile Edge HW configuration & components - demo POD details: Mobile XOS, Mobile ONOS, CORD fabric, 3x vBBU, 2x RRU, 1x front-haul switch, 4x UE, TeraVM, EPC (MME), EPC (PGW-D), PGW-C.
Radisys Disaggregated Packet Gateway Components - the Radisys packet gateway solution consists of two components:
- Packet Gateway Data Plane (PGW-D)
  Function: performs gateway functions such as GTP tunneling/de-tunneling, packet statistics, charging, lawful intercept, NAT, etc.
  Form factor: x86-based server, CentOS 7.0, KVM; based on an OVS architecture.
  Interfaces: 3GPP S5 interface with the Serving Gateway; SGi interface with the external packet data network; OpenFlow 1.4 interface with the ONOS controller.
- Packet Gateway Control Plane (PGW-C)
  Function: implements the control aspect of the gateway functions (GTP-C signaling and application logic).
  Form factor: KVM-based virtual machine, CentOS 7.0.
  Integrates with the ONOS north-bound interface to program the PGW-D.

POC HW Configuration (needs update) - component / provider & HW / role & characteristics:
- XOS/OpenStack - ON.Lab / x86 - Service decomposition and orchestration
- ONOS - ON.Lab / x86 - Topology abstraction, event processing & forwarding control
- CORD fabric - ON.Lab / x86 - Acceleration for mobility service
- VNF - Airhop (TBD) - eSON application
- Edge: vPGW data, MME & PGW control - Radisys, NetCracker (x86) - Disaggregation of the SGW and PGW data plane from the control plane
- Central: vPGW data, MME & PGW control - TBD, simulated (x86) - Centralized MME, SGW and PGW data plane (home operator's EPC)
- vBBU (2-3) - Cavium / ThunderX - OF/OVS enabled
- Fronthaul switch (Ethernet) - Optical cross-connect or Ethernet switch - Interconnects vBBUs to the remote radio units
- RRUs - Cavium / OCTEON Fusion - Remote radio unit providing the LTE Uu interface
- Application servers / EPC / application clients (UEs) - Cobham / Aeroflex TeraVM (x86) - Emulate UEs, collect measurements, emulate EPC components

SW Configuration for Every Component (please update as much as you can)
Example - eNB/BBU+RRU: band / bandwidth, TX power, eNB IDs (PCI, ECGI), ... (Excel spreadsheet with the list of parameters used for provisioning and optimization)

Signaling Diagram – System Initialization Possible signal flow diagram(s) for system initialization

Mobile CORD ONOS Data Model Graph: Topology & Sector-Level Abstraction
Applications on top of the graph: eSON and the ONOS GUI; flow-table control for handoff and new attachments. (A conceptual sketch of such a graph follows below.)
[Diagram: graph linking the fabric, vMME, S-GW, P-GW and vPGW-C over S11/S5/S1-MME/S1-U links to vBBU eNBs (1-5), which are interconnected over X2, each with RRH sectors and ports, plus the PDN and attached UEs.]
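The fragment below is a conceptual sketch (not the ONOS data model itself) of a sector-level topology graph like the one pictured above. Node names, link types and the neighbour lookup are illustrative assumptions.

```python
# Conceptual topology graph: eNBs with RRH sectors, EPC nodes, and typed links.
topology = {
    "nodes": {
        "vMME":      {"type": "epc-control"},
        "SGW":       {"type": "epc-data"},
        "PGW":       {"type": "epc-data"},
        "vBBU-eNB1": {"type": "enb", "sectors": ["RRH-1a", "RRH-1b", "RRH-1c"]},
        "vBBU-eNB2": {"type": "enb", "sectors": ["RRH-2a", "RRH-2b"]},
    },
    "links": [
        ("vBBU-eNB1", "vMME", "S1-MME"),
        ("vBBU-eNB1", "SGW",  "S1-U"),
        ("vBBU-eNB2", "vMME", "S1-MME"),
        ("vBBU-eNB2", "SGW",  "S1-U"),
        ("SGW",       "PGW",  "S5"),
        ("vBBU-eNB1", "vBBU-eNB2", "X2"),
    ],
}

def x2_neighbours(graph: dict, enb: str) -> list:
    """Return the eNBs reachable from `enb` over X2 links (the kind of query
    an eSON-style neighbour-relationship check might run)."""
    return [b if a == enb else a
            for a, b, kind in graph["links"]
            if kind == "X2" and enb in (a, b)]

print(x2_neighbours(topology, "vBBU-eNB1"))   # ['vBBU-eNB2']
```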

Performance & Delay Budgets (update)
- Handoff
- Mobile CORD eSON real-time update loops: ?? TBD
- Service chaining, signaling
- Data path

Backups & Notes

Architecture Goal: Future SDN (ONOS) Control for the Mobile Network
[Diagram: ONOS exposes NBI/SBI to the control elements BBU-C, MME, SGW-C and PGW-C (3GPP signaling traffic, plus PCRF and HSS), while the data-plane elements BBU, SGW-D and PGW-D carry 3GPP data traffic from the UE/RRH at the edge cloud, across the M-CORD switch fabric (spine & leaf), to the Internet. Forwarding is controlled by ONOS; chaining is managed by XOS.]
Whether we designate a separate VM/switch for the {BBU, SGW, PGW}-D elements, or let the CORD fabric's leaf switches take on these EPC data-plane roles, is TBD.

Topology & Call Flow
Initial network configuration through JSON; statistics and a GUI are provided via the ONOS agent, and events that make ONOS change the flow table arrive over OF and WebSockets with JSON (architecture goal for the future).

UE attachment control flow:
1. UE -> RRU -> MME (3GPP authentication / location update process; emulated)
2. MME -> RRU (default bearer establishment procedure)
3. MME -> ONOS -> SGW/PGW (to create the flow entry)
4. The UE gets its IP through 'Attach Accept' (MME -> eNB -> UE); the address is assigned by the PGW-C

UE during handover:
1. The UE starts the handover request
2. Standard handover procedure over static X2 interfaces
3. MME -> ONOS -> SGW (ONOS updates the flows for the UE)

UE data flow (service request):
1. UE -> RRU -> eNB; the UE gets a data channel assignment
2. MME -> eNB, S1 bearer set-up
3. The MME signals ONOS to create the flow entry (OF will not be used to talk to the eNB for establishing UE-mobility-related flow entries; OF is only used for the flow entries from eNB to SGW, which are static)

For POC #1, OF will mainly be used to manage the flow-table entries for data traffic (a hedged example of installing such an entry through the ONOS REST API follows below).

[Diagram: leaf-spine fabric; control functions group (SGW-C, MME, PGW-C); Radisys SGW-D/PGW-D; vBBU eNBs (1) and (2); eSON Airhop VM receiving state updates from the eNBs; RRU sectors and UEs; edge services (CDN, DNS, static addresses); interfaces X2, S1-U, S1-MME, S5 (GTP-U); Internet.]
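To make step 3 of the data flow concrete, the sketch below installs one static S1-U (GTP-U) forwarding entry on a fabric switch through the ONOS flows REST API. It is a hedged illustration: the device ID, output port, tunnel-endpoint address and credentials are placeholders, and matching on the GTP TEID itself would require the OF extension discussed later in these slides.

```python
import requests  # third-party HTTP client, assumed available

ONOS = "http://onos-controller:8181/onos/v1"   # hypothetical controller address
AUTH = ("onos", "rocks")                       # default ONOS credentials

def install_s1u_flow(device_id: str, out_port: int, tunnel_endpoint_ip: str) -> None:
    """Install a static forwarding entry for GTP-U (UDP/2152) traffic headed
    to one tunnel endpoint; all values are illustrative."""
    rule = {
        "priority": 40000,
        "isPermanent": True,
        "deviceId": device_id,
        "treatment": {"instructions": [{"type": "OUTPUT", "port": str(out_port)}]},
        "selector": {"criteria": [
            {"type": "ETH_TYPE", "ethType": "0x0800"},   # IPv4
            {"type": "IP_PROTO", "protocol": 17},         # UDP
            {"type": "UDP_DST", "udpPort": 2152},          # GTP-U
            {"type": "IPV4_DST", "ip": f"{tunnel_endpoint_ip}/32"},
        ]},
    }
    r = requests.post(f"{ONOS}/flows/{device_id}", json=rule, auth=AUTH)
    r.raise_for_status()

# Example: steer GTP-U traffic for eNB 1's tunnel endpoint out port 2 of leaf-1.
install_s1u_flow("of:0000000000000001", 2, "192.168.10.11")
```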

Topology & Call Flow (in detail, ③)
UE data flow (service request):
1. UE -> RRU -> eNB; the UE gets a data channel assignment
2. MME -> eNB, S1 bearer set-up
(OF will not be used to talk to the eNB for establishing UE-mobility-related flow entries; OF is only used for the static flow entries from eNB to SGW.)
A sequential arrow diagram will be added.
[Diagram: same topology as the previous slide - leaf-spine fabric, control functions group (SGW-C, MME, PGW-C), Radisys SGW-D/PGW-D, vBBU eNBs, eSON Airhop VM, RRU sectors, UEs, edge services (CDN, DNS, static addresses), Internet.]

CORD PoC_1 High-Level Overview: Segmented Gateway - Control Path Component
Suggestions for the PoC use case:
- Goal: flow-based selection of a centralized or local user-plane mobility anchor. Simpler option: keep the same LTE interfaces between eNB, MME and SPGW-C. Part of the ETSI NFV demo "SDN Enabled Virtual EPC Gateway".
- Goal: signaling optimization for IoT with a persistent core-network bearer. This will need work to optimize S1-MME and S11 signaling and breaks standard interfaces. Connectionless communication for IoT devices falls in this category (http://web2-clone.research.att.com/export/sites/att_labs/techdocs/TD_101553.pdf).
- Goal: effectiveness of OpenFlow for the SGW/PGW-D user plane. Extensions needed: GTP tunnel match and action; charging capability (raise an asynchronous alarm when data usage reaches a certain margin); hierarchical metering in the PGW (bearer- and APN-level metering); downlink packet buffering in the SGW-D during paging.

Beyond March Demo - Wish List
- Connectionless services to minimize signaling overhead
- DNS at the edge
- mHealth applications at the edge
- Need to add a scenario to depict edge core usage

Mobile Edge Use Case Examples
- UE/flow performance and QoE visualization
- Geo-analysis & behaviors
- Cell planning and analysis
- Coverage analysis
- Device impact
- Radio block usage analysis
- Network performance monitoring
- Capacity monitoring & analysis
- Correlation & root-cause analysis
- Ad-hoc on-demand analysis
- Self-organizing real-time feedback loop
- Self-healing load balancing
- Energy-saving optimization
- Automatic neighbor relationship

CORD PoC_1 High-Level Overview: Segmented Gateway
Main components of the segmented gateway PoC (architecture goal for the future):
- Control path component: MME functions; signaling and control plane of the serving and packet gateways (BBU-C, MME, SGW-C, PGW-C), driven over the ONOS NBI/SBI; PCRF and HSS carry 3GPP signaling traffic.
- Data path component: serving and packet gateway data-path functions (Cavium vBBU-D, SGW-D, PGW-D as OF-controlled VMs; Radisys FlowEngine data/forwarding-plane elements), carrying 3GPP data traffic across the M-CORD switch fabric (spine & leaf).
- The UE attaches via the RRH (Ethernet/CPRI) per sector/cell; forwarding is controlled by ONOS and chaining is managed by XOS.

CORD PoC_1 High-Level Overview: Segmented Gateway - Data Path Component
- Component: Radisys FlowEngine providing the data/forwarding components for the PGW and SGW
- Location: distributed and centralized EPC CORD racks
- Form factor: x86 COTS server (and EZchip NPU system for high scale/performance). Note: Radisys FlowEngine also supports deployment as a leaf switch.
- High-level block diagram: software switch, OVS-based architecture, controlled over OF.

CORD PoC_1 High-Level Overview: Segmented Gateway - Data Path Component (continued)
- Control interface: compliant with OpenFlow 1.4, plus extensions to support the PoC use case.
- Management interface: OVSDB-based.
Data path function highlights:
- Supports the PGW and SGW data-path functions, scalable to millions of UEs/devices.
- Flexible definition of the gateway function as a sequence of OF tables (a schematic sketch of the pipeline follows below).
- The ingress table is responsible for sending signaling-plane messages to the signaling application via the OF PACKET_IN option. It also identifies whether a packet is upstream or downstream and saves the direction in metadata.
- The bearer mapping table performs the gateway's core function: de-tunneling (upstream) and tunneling (downstream) of packets. It relies on an OF extension to support GTP tunnels. Downstream packets are mapped to a bearer based on the Service Data Flow (SDF) and Traffic Flow Template (TFT).
- Metering (not in the PoC) and charging.
- Value-added service is a placeholder for functions that can be part of the data plane, such as lawful intercept, NAT, or flow redirection for DPI.
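The sketch below models the table sequence described above in plain Python, just to make the pipeline order and responsibilities explicit. It is not the FlowEngine implementation: the table names match the slide, but the packet fields, TFT lookup and counters are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    is_gtpu: bool            # True if the packet arrived GTP-U encapsulated (upstream)
    teid: int = 0            # bearer tunnel endpoint ID
    inner_dst: str = ""      # user-plane (inner) destination IP
    meta: dict = field(default_factory=dict)

def ingress_table(pkt: Packet) -> str:
    """Punt signaling to the controller (PACKET_IN) and record the
    upstream/downstream direction in metadata."""
    if not pkt.is_gtpu and pkt.inner_dst == "":      # crude stand-in for GTP-C etc.
        return "PACKET_IN"
    pkt.meta["direction"] = "up" if pkt.is_gtpu else "down"
    return "bearer_mapping"

def bearer_mapping_table(pkt: Packet, tft: dict) -> str:
    """De-tunnel upstream packets; map downstream packets to a bearer
    via an SDF/TFT lookup and re-tunnel them."""
    if pkt.meta["direction"] == "up":
        pkt.is_gtpu = False                          # strip the GTP-U header
    else:
        pkt.teid = tft.get(pkt.inner_dst, 0)         # TFT lookup -> bearer TEID
        pkt.is_gtpu = True                           # add the GTP-U header
    return "charging"

def charging_table(pkt: Packet, counters: dict) -> str:
    counters[pkt.teid] = counters.get(pkt.teid, 0) + 1
    return "value_added"                             # lawful intercept / NAT / DPI redirect
```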

CORD PoC_1 High-Level Overview: Segmented Gateway - Control Path Component
Main components of the segmented gateway PoC: control path component (architecture goal for the future)
- Location: acts as a CORD application, programming the data plane over OF.
- Function: optimized, co-located MME, SGW and PGW control-plane function.
- Use case: optimize LTE signaling overhead.
  Why: LTE core signaling (eNB -> MME, MME -> SGW) during idle-to-active (and vice versa) transitions does not scale for IoT deployments.
  How: make the eNB-to-SGW bearer connection permanent, so it is not torn down during the UE's transition to idle.
  Additional use case: connectionless communication for IoT devices, if the UE and eNB support it (http://web2-clone.research.att.com/export/sites/att_labs/techdocs/TD_101553.pdf).
  Optimize the standard LTE interfaces S11, S1-C, S5-C and S8-C. During the idle-state transition the SGW-C (and BBU-C) will not be asked to delete its S1-U bearer; the SGW-D keeps the bearer information. The OF flow timeout mechanism can be used to tune when the bearer is deleted in the SGW-D (TBD; a hedged example follows below).
  Interface with the vBBU to keep the GTP bearer intact during the idle transition. When the UE returns to the active state and establishes an RRC connection, no core signaling is needed to modify the bearer in the SGW-D.
- Form factor: the MME, SGW-C and PGW-C can all be part of one VM, with S11 and S5-C/S8-C replaced by internal interfaces.
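As a hedged illustration of the flow-timeout idea above, the fragment below shows how an SGW-D bearer entry could carry a timeout instead of being explicitly deleted when the UE goes idle. The field values and datapath ID are placeholders, and the exact idle-versus-hard timeout semantics would still need to be confirmed against the data plane used.

```python
# Bearer entry that ages out on its own rather than being deleted by signaling.
bearer_rule = {
    "priority": 40000,
    "timeout": 3600,          # seconds before the entry expires (semantics TBD)
    "isPermanent": False,     # let the data plane expire it, no explicit delete
    "deviceId": "of:00000000000000a1",               # hypothetical SGW-D datapath ID
    "selector": {"criteria": [
        {"type": "ETH_TYPE", "ethType": "0x0800"},
        {"type": "IP_PROTO", "protocol": 17},
        {"type": "UDP_DST", "udpPort": 2152},        # GTP-U; per-TEID match needs the OF GTP extension
    ]},
    "treatment": {"instructions": [{"type": "OUTPUT", "port": "3"}]},
}
```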

Action Items
- JSON config file to provide the topology to ONOS; future work: Ali implementing a link provider (discussion to support mobility)
- Segment routing???
- Number of flow rules

CORD PoC_1 Decomposed LTE Core Gateways: Open Items
- Potential partner for the gateway control application: need to work closely to define the OF extensions and features needed for the data plane.
- Overall use cases of PoC_1:
  How does the packet gateway function fit in the overall use case targeted for PoC_1?
  Is the use case an end-to-end voice/video call over the CORD network?
  Does the infrastructure exist to simulate end-to-end call flows (eNBs, mobiles, OTT service, etc.)?
  Are the gateway deployment options (access or centralized, based on traffic type) part of the PoC?
  Is the goal to show throughput/latency aspects as well?
  Will data monitoring and charging (via the PGW data plane) be part of the PoC?
- Platform: what type of x86 server is available to run the gateway data plane?
- Resources & logistics:
  Resources (and their location) from the Radisys perspective.
  ON.Lab resources to coordinate the gateway-based PoC.
  Does ON.Lab get involved in the development effort for the PoC?
  Expectations on what becomes open source.

SDN-Controlled Aggregation and Backhaul
As you mentioned, if all switches/routers within aggregation and backhaul are SDN controlled, we can define a proprietary forwarding mechanism (e.g. route based on 10.10.0.3). I see two main challenges with this approach:
- Large flow tables: the flows we create in all switches would be per UE/IoT device. This assumes all intermediate OF switches can handle large numbers of flows (per UE/IoT). With most switches based on COTS switching silicon (like Broadcom), there will be limits on the maximum number of flows that can be supported (probably thousands or hundreds of thousands).
- OF extensions: if we need certain specific OF extensions, all the intermediate switches need to support them.

Do we still need an anchor gateway (PGW)? Within LTE, GTP tunneling plays two roles. The first is to enable mobility of devices by preserving their IP addresses; this is not needed for stationary IoT devices. The second role of the PGW is to act as an anchor for UE traffic so that operator services/policies can be applied to it. This includes services like DPI, NAT, firewall, lawful intercept, charging, parental control, etc. provided on the SGi interface (implemented via service chaining). I would assume that some of these services are important for IoT devices as well. When tunneling is applied, it is easy to route all UE/IoT traffic to the PGW anchor where these services can be applied. In a flat network (without any tunneling), is there a way to define a single anchor point for a specific device's ingress/egress traffic? This is related to the first point: if all intermediate switches can be OF-controlled at the UE/IoT device level, we can define a switch as the anchor for a device.

One option: a static, single tunnel from eNB to SGW and from SGW to PGW. One way to address the above concerns is to keep the tunnel from eNB to SGW-D (and SGW-D to PGW-D), but not make it specific to an IoT device. There will be one tunnel from the eNB to the potential SGW-Ds (and from the SGW-Ds to the PGW-Ds); this tunnel is created statically. We keep most of the procedure the same (IP address assignment by the PGW, UE using this address for its session). With this approach:
- Only the eNB and the SGW-D/PGW-D solution need to maintain millions of UE/IoT-specific flows; all intermediate switches can be plain white-box switches.
- The controller needs to program UE/IoT-specific flows only in the eNB/SGW-D/PGW-D nodes and not in intermediate switches. This reduces complexity in the intermediate switches; they continue to route/switch packets based on other header fields.
- The PGW still acts as an anchor to apply operator policies and services for IoT devices.
- It still achieves the initial goal of removing the signaling overhead of individual tunnel creation/deletion for IoT devices.
- No change to UE software is needed.
- Not having a tunnel per IoT device certainly limits QoS provisioning on a per-IoT-device basis.

Packet Flow through the MCORD Platform (update)

Backups Preliminary : Operation Steps for Initial Attach

Backups EPS Session Establishment Procedure

Backups EPS Session Establishment Procedure (Con’t)

Backups EPS Session Establishment Procedure (Con’t)

Backups Concept of X2 Handover (Con’t)

Backups Connections and State Transition : Before / During / After X2 Handover

Backups Connections and State Transition : Before / During / After X2 Handover (Con’t)

Backups Handover Preparation

Backups Handover Execution

Backups Handover Completion

Backups Connections and States Before / After Service Request

Backups Procedure for UE Triggered Service Request

Backups Procedure for UE Triggered Service Request (Con’t)

Backups Procedure for Network Triggered Service Request

Backups Procedure for Network Triggered Service Request (Con’t)

Backups Information in Evolved Packet System Entity Before Service Request

Backups Information in Evolved Packet System Entity After Service Request

Questions / Assumptions
- Radisys' contribution to the PoC is around the segregated packet gateway in the LTE core network. The idea is to split the traditional gateway into a data plane (GTP-U, S5/S8) and a control plane (GTP-C S5/S8, Diameter). The control plane will sit on top of ONOS and program the data plane via the OF interface. Radisys has a data-plane solution based on an OVS architecture that provides OF/OVSDB interfaces. We are in discussion on how to handle the control-plane aspect.
- We have the eSON parameter list from Airhop that provides the requirements for the ONOS NB APIs.
eNB-to-eSON APIs:
- configure_ue_periodic_measurement_report
- configure_ue_a4_measurement_report
- set_pci
- update_ocn
- update_q_offset
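To make the list above concrete, here is a hedged stub of what north-bound wrappers around these eNB-to-eSON calls might look like. Only the function names come from the Airhop parameter list; every signature, parameter and the send-callback transport are illustrative assumptions.

```python
class ESonNbApi:
    """Thin, illustrative wrapper that forwards eNB-to-eSON calls over some
    transport (e.g. a callable that POSTs JSON to the eSON service)."""

    def __init__(self, send):
        self.send = send

    def configure_ue_periodic_measurement_report(self, enb_id: str, period_ms: int) -> None:
        self.send("configure_ue_periodic_measurement_report",
                  {"enb": enb_id, "period_ms": period_ms})

    def configure_ue_a4_measurement_report(self, enb_id: str, threshold_dbm: float) -> None:
        self.send("configure_ue_a4_measurement_report",
                  {"enb": enb_id, "threshold_dbm": threshold_dbm})

    def set_pci(self, enb_id: str, pci: int) -> None:
        self.send("set_pci", {"enb": enb_id, "pci": pci})

    def update_ocn(self, enb_id: str, neighbour_offsets: dict) -> None:
        self.send("update_ocn", {"enb": enb_id, "offsets": neighbour_offsets})

    def update_q_offset(self, enb_id: str, neighbour_id: str, q_offset_db: int) -> None:
        self.send("update_q_offset",
                  {"enb": enb_id, "neighbour": neighbour_id, "q_offset_db": q_offset_db})
```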