1
MCORD Team
- NetCracker: Neeraj Bhatt, Max Klyus, Valentin Plotnichenko
- Radisys: Joseph Sulistyo, Prashant Sharma, Prakash Siva
- Cavium: Kin-Yip Liu, Tejas Bhatt
- AirHop: Yan Hui, Hanson On
- NEC: Yuta Higuchi
- SK Telecom: Mingeun Yoon
- AT&T: Tom Tofigh
2
MCORD POC Meeting Agenda (3 Dec 2015)
Reminder: Solution Providers
- Both SGW-C and SGW-U will be provided by NetCracker
- Who will emulate the HSS? Emulated by NEC/NetCracker

Action & Discussion Items
- Logistics: racks - one at ON.Lab, one at Cavium
- Topology representation in ONOS
- Packet tracing (signaling, data)
- Integration
- Clarify H/W requirements
- Network configuration/map (range of IPs, ports for each node) should be given to partners (by Cavium)
- Interface between the VNF manager and XOS
- Method of traffic classification (local vs. non-local) - also need to decide whether we need a 'Central EPC' or not
3
Topology & Flow Path (Modified)
For this POC, only PGW-C will talk to ONOS (north-bound ONOS agent).
[Diagram: leaf-spine fabric (2 spines, 3 leaves) connecting vBBU/eNBs (1, 2) with RRU sectors and UEs to the SGW, PGW-C/PGW-D, MME, SON, edge services (cache), and the Internet; interfaces X2, S1-U, S1-MME, S5 (GTP-U); statistics GUI.]
4
Traffic Classification
Traffic classification for directing traffic to a local (edge) service or to a central service. Possible options:
- APN (Access Point Name)
  - The MME assigns the PGW based on the APN requested by the UE (application)
  - Easiest way from the network point of view
  - However, it needs some setting or application configuration on the UE, so this method may be limited to predefined users or applications
  - Not flexible in practice, but a simple way to showcase
- Interception
  - Tap the traffic behind the BBU and direct it to a local VNF if it is intended to be handled at the edge, or send it to the central core otherwise
  - If we use the destination IP as the decision rule, it is inside the GTP header, so some function or node that can inspect the (GTP) packet is needed
- DPI (Deep Packet Inspection)
- LIPA (Local IP Access) & SIPTO (Selected IP Traffic Offload)
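The APN option above amounts to a tiny MME-side selection rule. The sketch below is illustrative only: the APN strings and function name are assumptions, not part of any 3GPP or M-CORD interface.

```python
# Hypothetical APN-based steering rule (illustrative names, not a real API).
# APNs provisioned for edge handling anchor on the local PGW; everything
# else goes to the central core.
LOCAL_APNS = {"edge.enterprise", "edge.video"}

def select_pgw(requested_apn: str) -> str:
    """Return which PGW the MME should assign for this attach request."""
    if requested_apn in LOCAL_APNS:
        return "local-pgw"   # traffic stays at the mobile edge
    return "central-pgw"     # default: home operator EPC

print(select_pgw("edge.video"))   # -> local-pgw
print(select_pgw("internet"))     # -> central-pgw
```

This is also why the method suits predefined users or applications: the APN has to be configured on the UE before it can steer anything.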
5
Local EPC vs Central EPC
Scenario options
- Use only a video cache at the mobile edge (without a local EPC)
  - Once traffic is classified as 'local', it is directed to the local cache
  - No local EPC needed
  - Doesn't support handover (some solutions exist that do), but that is acceptable for local usage
- Use both a local EPC and a cache
  - Needs a local EPC
  - Supports handover
  - Good to showcase the distributed-core strategy

Conclusion (suggestion)
- Use the 'APN' method for traffic classification
- Use emulation (by TeraVM) for the central EPC pair
6
Discussion
- Agent on ONOS to interact with PGW-C
- How to handle GTP tunneling
- Integration efforts: integrate without ONOS first, then go with XOS/ONOS
- Rack details
- Renew app APIs, parameter list
- XOS: static vs. dynamic VM spin-up - need to show on-demand deployment at least for the initial setup
- Project management: SPRINT, JIRA, Trello
7
Use Cases (Demo Scenarios to show)
#1 Local Enterprise Intranet Services
- A good way to show the benefit of utilizing localized mobile network infrastructure
- Use an APN assigned to enterprise employees for internal usage
- Traffic destined for the enterprise will be directed to the 'Local Enterprise Slice', while employees can still use normal mobile services through the central mobile network
- Can show local cache, intranet, local DNS, analytics, etc.
- Can show handover scenarios, both internal and inter-cell (micro~macro)

#2 Cell on Demand (ANR) Scenario
- 2 cells shown on the SON GUI
- Generate heavy UE traffic from TeraVM
- Show congestion on the cells in the GUI
- Spin up a new cell
- Automatic Neighbor Relations (by SON)
- Possible MLB (Mobile Load Balancing) execution
8
VNF-M interworking (NetCracker)
NetCracker's VNF-Manager, interfacing with XOS, will abstract the complexities of the VNFs below it.
- Can Cavium's vBBU and Radisys' PGW also be handled in the same way? It would be good if possible.
- Need to find out what should be done on the Cavium vBBU and Radisys PGW side.
9
eSON by ACORD framework
- eSON registers on XOS as a service using ACORD
- vBBUs talk with XOS (connected via API)
- eSON will be notified of events from the BBUs by XOS
- eSON can access the DB formed by XOS/ACORD to get information from the vBBUs
- The same logic can be applied to SGWs, PGWs, and MMEs
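The notification path above is essentially an observer pattern: vBBUs report to XOS, which fans events out to registered services such as eSON. Every class and method name in this toy sketch is an illustrative assumption, not the actual XOS/ACORD API.

```python
# Toy model of the XOS/ACORD notification path (names are assumptions).

class XOS:
    def __init__(self):
        self.services = []

    def register(self, service):
        self.services.append(service)   # eSON registers as a service

    def bbu_event(self, event):
        for s in self.services:
            s.notify(event)             # fan BBU events out to services

class ESON:
    def __init__(self):
        self.events = []

    def notify(self, event):
        self.events.append(event)       # eSON reacts (e.g. ANR, MLB)

xos, eson = XOS(), ESON()
xos.register(eson)
xos.bbu_event({"bbu": "vBBU-1", "type": "load_report"})
print(len(eson.events))   # -> 1
```

The same registration/notification shape would apply unchanged if SGWs, PGWs, or MMEs were the event sources.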
10
Work to do
11
Work to do (summary)
- EPC integration effort
- TOSCA model for XOS
- eSON plug-in using the XOS/ACORD framework (notification)
- Discovery of the mobile topology to show in the GUI
- VNF-Manager interworking (NetCracker MME & SGW; also for vBBU & PGW)
- TeraVM test script
12
Mobile Edge Project Scope Working Slides
Work done by the last meeting (Nov. 25th, 2015)
13
Contributions and Responsibilities
Key Milestones (Dec 15, Feb 1, Feb 15, Mar 1, Mar 13, Mar 20 - ONS2016)
- Rack configured: ISS, fabric, RAN, EPC, SON, services/demo apps
- Integration testing of ONOS + OpenStack + XOS on real HW for all vendors completed
- Multi-vendor data-plane inter-operation testing
- All vendors' SW/HW integration; M-CORD integration completed
- M-CORD POD at ON.Lab (IEEE, AT&T, SK, VZ)

Contributions and Responsibilities
- Infrastructure software stack (ONOS + OpenStack + XOS): ON.Lab
- Fabric: ON.Lab
- RAN (vBBU): Cavium
- EPC (PGW, SGW, MME): Radisys and NetCracker/NEC
- SON: AirHop
- Test equipment: Cobham/Aeroflex
- Services and demo apps: ON.Lab & Aeroflex
14
Team (please add names/emails)
- Project lead: Tejas Bhatt (cell: )
- Developers:
  - AirHop: eSON - NB APIs / interface to vBBUs; possible interface to PGW
  - Cavium: Farouk Badawy, Hossam Abdallah, Kin-Yip Liu
  - Radisys (3): Joseph Sulistyo
  - NetCracker (3): Neeraj Bhatt, Yuta Higuchi
  - Aeroflex: test & integration support (1) - Jim Smith, Michael Lingg, Marko Falck
- ONOS support team (2-3): SB OF interface - Ali, Marc, Charles; XOS service chaining - Simon, Scott, Tom, Yoon
- SK: system integration & performance optimization; define the demo and configuration environment
15
ONOS (Virtualization, Slicing) + OpenStack (Multi Domain) + XOS
CORD Vision
- Residential: residential software stack - vOLT, vSG, vRouter, vCDN
- Enterprise: enterprise software stack - VPN, VOD, vCDN, ...
- 4G mobility: software stack - vBBU, VOD, vCDN, vDNS
- 5G mobility: stack over multiple RATs
All on ONOS (virtualization, slicing) + OpenStack (multi-domain) + XOS over a leaf-spine fabric.
[Diagram: front-haul to RRUs, BBUs (multi-RAT), backhaul to the operator mobile core.]
16
Scope of the project for ONS 2016 (March 15th)
Deliverables: M-CORD (Mobile Edge) concept
- Evaluate delay performance & flow-control options for disaggregation of the EPC
- Realize the benefit of mobile edge & service chaining
- 5G mobility stack over multiple RATs; 4G LTE road-map: vBBU, vCDN, vDNS
- ONOS (virtualization, slicing) + OpenStack (multi-domain) + XOS
[Diagram: CORD fabric connecting virtualized BBUs, edge service functions (video caching, vSGW, vPGW), and the distributed EPC on the CORD platform; EPC control functions (vSGW, MME, vPGW, PCRF, XOS, OpenStack, ONOS) and a centralized EPC behind a backbone switch; option: UP/CP separation.]
17
What should we demo?
- Mobile services at the edge (video caching)
- SON configuration for optimization of eNBs
- M-CORD platform for abstraction of mobile components
- Disaggregated EPC model - demonstrate handover through OF flow-table modification
18
Mobile Edge March Demo Test Scenario:
- The mobility topology (JSON/Python script) will be pushed to ONOS if it is not discoverable
- The topology will include what has been configured in the Mobile Edge rack
- The ONOS GUI will be used to validate the topology as known to ONOS and as configured. This includes:
  - the actual hardware (Aeroflex as UE hosts, spine & leaf as connectivity)
  - mobile RAN and control elements (RRUs, vBBUs, links among vBBUs as a representation of the RAN)
  - RAN connectivity to the mobile EPC controls (MME, SGWs, PGWs, PDN, service elements)

Initialization scenarios
1. Use Aeroflex to validate that no UE attachments exist
2. Use the Aeroflex traffic generator to initiate x UE attachments
3. Use the ONOS GUI to show that connectivity is established between UEs and service termination points

Mobile edge cache utilization scenarios: TBD
Handoff scenario:
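Pushing a hand-written topology to ONOS (the first step above) might look like the following sketch. The host, credentials, and device/host IDs are placeholder assumptions for this PoC rack; `/onos/v1/network/configuration` is ONOS's standard network-config REST endpoint.

```python
import json
import urllib.request

# Placeholder ONOS address for the Mobile Edge rack (assumption).
ONOS_URL = "http://onos.example:8181/onos/v1/network/configuration"

def build_topology_config():
    """Minimal network-cfg: one leaf switch and one vBBU host (IDs invented)."""
    return {
        "devices": {
            "of:0000000000000001": {"basic": {"name": "leaf-1"}},
        },
        "hosts": {
            "00:00:00:00:00:01/None": {"basic": {"name": "vBBU-1"}},
        },
    }

def push_config(cfg, user="onos", password="rocks"):
    """POST the config to ONOS; caller supplies real credentials.
    Basic-auth handling is omitted for brevity (add an auth header)."""
    req = urllib.request.Request(
        ONOS_URL,
        data=json.dumps(cfg).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

cfg = build_topology_config()
print(sorted(cfg))   # -> ['devices', 'hosts']
```

The GUI validation step then amounts to checking that the devices and hosts named in this JSON appear in the ONOS topology view.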
19
Additional features and functionality needed for each building block - identify development gaps
- ONOS (ON.Lab)
- XOS (ON.Lab)
- RRU and edge BBUs (Cavium)
- Edge P/SGW data plane (Radisys)
- MME, PGW control (NetCracker)
- Orchestration control / VNF management (XOS/ON.Lab, Radisys)
- eSON application (AirHop)
- TeraVM test VM (Cobham/Aeroflex)
- Smartphones / UEs (Cavium)
- Caching SW (leveraged from the CORD project)
20
PoC: who does what?
Update the PoC ownership diagram; add SGW/RRU/eSON server, etc.
[Diagram: Mobile CORD ONOS stack - Cavium BBU-C/BBU, NetCracker vMME/vSGW, Radisys PGW-C/PGW-U, XOS, OpenFlow, OpenStack; interfaces S1-MME, S11, S1-U, S5, SGi; service ownership marked '?' for some components.]
21
Add physical server location
Project definition: add the physical server location for each entity (Mobile XOS, Mobile ONOS, CORD fabric, 3x vBBU, 2x RRU, 1x front-haul switch, 4x UE TeraVM, EPC (MME), EPC (PGW-D)).

Mobile Edge HW configuration & components - demo POD details

Radisys Disaggregated Packet Gateway Components
The Radisys packet gateway solution consists of two components:
- Packet gateway data-plane component (PGW-D)
  - Function: performs gateway functions such as GTP tunneling/de-tunneling, packet statistics, charging, lawful intercept, NAT, etc.
  - Form factor: x86-based server, CentOS 7.0, KVM; based on an OVS architecture
  - Interfaces: 3GPP S5 interface with the serving gateway; SGi interface with the external packet data network; OpenFlow 1.4 interface with the ONOS controller
- Packet gateway control-plane component (PGW-C)
  - Function: implements the control aspect of gateway functions, such as GTP-C signaling and application logic
  - Form factor: KVM-based virtual machine, CentOS 7.0
  - Integrates with the ONOS north-bound interface to program PGW-D
22
POC HW Configuration (needs update)
Component: Provider / HW | Role & Characteristics
- XOS/OpenStack: ON.Lab / x86 | Service decomposition and orchestration
- ONOS: ON.Lab / x86 | Topology abstraction, event processing & forwarding control
- CORD fabric: ON.Lab / x86 | Acceleration for mobility-service VNFs
- eSON application: AirHop (TBD)
- Edge vPGW data, MME & PGW control: Radisys, NetCracker (x86) | Disaggregation of the SGW and PGW data plane from the control plane
- Central vPGW data, MME & PGW control: TBD, simulated (x86) | Centralized MME, SGW, and PGW data plane (home operator's EPC)
- vBBU (2-3): Cavium / ThunderX
- OF/OVS-enabled fronthaul switch (Ethernet): optical cross-connect or Ethernet switch | Interconnects vBBUs to remote radio units
- RRUs: Cavium / OCTEON Fusion | Remote radio unit providing the LTE Uu interface
- Application servers: EPC application
- Clients (UEs): Cobham / Aeroflex TeraVM (x86) | Emulates UEs, collects measurements, emulates EPC components
23
SW configuration for every component
Please update as much as you can. Example: eNB/BBU+RRU - band/bandwidth, TX power, eNB IDs (PCI, ECGI), ... (Excel spreadsheet with the list of parameters used for provisioning and optimization)
24
Signaling Diagram – System Initialization
Possible signal flow diagram(s) for system initialization
25
Mobile CORD ONOS Data Model Graph: Topology & Sector level Abstraction
[Diagram: Mobile CORD ONOS data-model graph abstracting the topology at sector level - applications (eSON, ONOS GUI) on top driving flow-table control, handoff, and new attachments; fabric links (S1-MME, S1-U, S5, S11) among the vMME, S-GW/P-GW (vPGW-C), and PDN; vBBU eNBs (1-5) interconnected over X2, each with RRH sectors, ports, and attached UEs.]
26
Performance & Delay Budgets ( Update )
TBD: handoff; Mobile CORD eSON real-time update loops (??); service chaining, signaling, and data path.
27
Backups & Notes
28
Architecture Goal for future
SDN (ONOS) control for the mobile network: ONOS NBI/SBI controls BBU-C, MME, SGW-C, and PGW-C; 3GPP signaling traffic goes to the PCRF/HSS, while 3GPP data traffic traverses the M-CORD switch fabric (spine & leaf) from the UE through RRH/BBU and SGW-D/PGW-D to the Internet at the edge cloud. Forwarding is controlled by ONOS; chaining is managed by XOS.
Whether we would designate a separate VM/switch for the {BBU, SGW, PGW}-D elements, or let the CORD fabric's leaf switches take on these EPC data-plane roles, is TBD.
29
Initial Network configuration through JSON
Topology & call flow: initial network configuration through JSON; statistics via the GUI.

UE attachment control flow:
1. UE → RRU → MME (3GPP authentication / location-update process; emulated)
2. MME → RRU (default bearer establishment procedure)
3. MME → ONOS → SGW/PGW (to create the flow entry)
4. The UE gets an IP through 'Attach Accept' (MME → eNB → UE), assigned from PGW-C

UE during handover:
1. The UE starts the handover request
2. Standard handover procedure over static X2 interfaces
3. MME → ONOS → SGW (ONOS updates the flows for the UE)

UE data flow (service request):
1. UE → RRU → eNB: the UE gets a data-channel assignment
2. MME → eNB: S1 bearer set-up
3. Signals ONOS to create the flow entry. (OF will not be used to talk to the eNB for establishing UE-mobility-related flow entries; OF is just utilized for making the flow entry from the eNB to the SGW, which is static.)

For POC #1, OF will mainly be used for managing "data traffic"-related flow-table entries.
[Diagram: spine/leaf fabric with a control-functions group (SGW-C, MME, PGW-C), Radisys SGW-D/PGW-D, vBBU eNBs (1, 2) with RRU sectors and UEs, eSON (AirHop VM receiving state updates from eNBs, etc.), edge services (CDN, DNS; static addresses), and an ONOS agent reacting to events that change the flow table; OF plus WebSockets with JSON; interfaces X2, S1-U, S1-MME, S5 (GTP-U); Internet uplink.]
30
Topology & Call Flow (in detail-③)
Initial network configuration through JSON; statistics via the GUI.

UE data flow (service request):
1. UE → RRU → eNB: the UE gets a data-channel assignment
2. MME → eNB: S1 bearer set-up (OF will not be used to talk to the eNB for establishing UE-mobility-related flow entries; OF is just utilized for making the flow entry from the eNB to the SGW, which is static)

A sequential arrow diagram will be added.
[Diagram: same spine/leaf topology as the previous slide - control-functions group, Radisys SGW-D/PGW-D, vBBU eNBs with RRU sectors and UEs, eSON AirHop VM, edge services, ONOS agent.]
31
CORD PoC_1 High Level Overview: Segmented Gateway: Control Path Component
Suggestions for PoC use cases:
- Goal: flow-based selection of a centralized or local user-plane mobility anchor
  - Simpler: we can keep the same LTE interfaces between eNB, MME, and SPGW-C
  - Part of the ETSI NFV demo "SDN Enabled Virtual EPC Gateway"
- Goal: signaling optimization for IoT, persisting the core-network bearer
  - Will need work to optimize S1-MME and S11 signaling; breaks standard interfaces
  - Connectionless communication for IoT devices falls in this category
- Goal: effectiveness of OpenFlow for the SPGW-D user plane
  - Extension for GTP tunnel match and action
  - Extension for charging capability (raise an asynchronous alarm when data used is within a certain margin)
  - Extension for hierarchical metering needed in the PGW (bearer- and APN-level metering)
  - Extension for downlink packet buffering needed in the SGW-D during paging
32
Beyond March Demo Wish List
- Connectionless services to minimize the signaling overhead at the edge
- mHealth at the edge
- Need to add a scenario to depict edge-core usage
33
Mobile Edge Use case Examples:
- UE/flow performance QoE visualization
- Geo-analysis & behaviors
- Cell planning and analysis
- Coverage analysis
- Device impact
- Radio block usage analysis
- Network performance monitoring
- Capacity monitoring & analysis
- Correlation & root-cause analysis
- Ad-hoc on-demand analysis
- Self-organizing real-time feedback loop
- Self-healing load balancing
- Energy-saving optimization
- Automatic Neighbor Relations
34
CORD PoC_1 High Level Overview: Segmented Gateway
Main components of the Segmented Gateway PoC (architecture goal for the future):
- Control-path component: MME functions plus the signaling and control plane of the serving and packet gateways; carries 3GPP signaling traffic to the PCRF/HSS and sits on the ONOS NBI/SBI alongside BBU-C, SGW-C, and PGW-C.
- Data-path component: serving-gateway and packet-gateway data-path functions (SGW-D, PGW-D) realized as Radisys FlowEngine data/forwarding-plane elements, plus the Cavium vBBU-D (VM, OF-controlled) per sector/cell, connected to the RRH over Ethernet/CPRI.
3GPP data traffic traverses the M-CORD switch fabric (spine & leaf); forwarding is controlled by ONOS and chaining is managed by XOS.
35
CORD PoC_1 High Level Overview: Segmented Gateway: Data Path Component
Main components of the Segmented Gateway PoC: data-path component
- Component: Radisys FlowEngine, providing data/forwarding components for the PGW and SGW
- Location: distributed and centralized EPC CORD racks
- Form factor: x86 COTS server (*and an EZchip NPU system for high scale/performance)
  *NOTE: Radisys FlowEngine also supports deployment as a leaf switch.
- High-level block diagram: software switch, OVS-based architecture, OF-controlled
36
CORD PoC_1 High Level Overview: Segmented Gateway: Data Path Component
Main Components of Segmented Gateway PoC: Data Path Component Control Interface: Compliant to Open Flow 1.4 and extensions to support PoC use case. Management Interface: OVSDB based interface. Data Path Function Highlights Support PGW and SGW data path function, scalable for millions of UEs/devices. Flexible definition of gateway function in terms of OF table sequence. Ingress table is responsible for sending signaling plane messages to Signaling application via OF PACKET_IN option. It also identifies whether packet is upstream or downstream and saves direction in metadata. Bearer mapping table performs gateway core function de-tunneling (upstream), tunneling (downstream) packets. Rely on OF extension to support GTP tunnels. For downstream packets, map it to bearer based on Service Data Flow (SDF) and traffic flow template (TFT). Metering (not in PoC) and Charging. Value added service is a placeholder for specific function that can be part of data plane like lawful intercept, NAT, flow redirection for DPI.
37
Control Path Component
Main components of the Segmented Gateway PoC: control-path component (architecture goal for the future)
- Location: acts as a CORD application (over OF)
- Function: optimized, co-located MME, SGW, and PGW control-plane functions
- Use case: optimize LTE signaling overhead
  - Why: LTE core signaling (eNB -> MME, MME -> SGW) during idle-to-active (and vice versa) transitions is not scalable for IoT deployments
  - How: make the eNB-to-SGW bearer connection permanent, so it is not torn down during the UE's idle-state transition
  - Additional use case: connectionless communication for IoT devices, where the UE and eNB support it
  - Optimize the standard LTE interfaces S11, S1-C, S5-C, and S8-C
  - During the idle-state transition, SGW-C (and BBU-C) will not be asked to delete the S1-U bearer; SGW-D keeps the bearer information, and the OF flow-timeout mechanism can be used to tune when the bearer gets deleted in SGW-D
  - (TBD) Interface with the vBBU to keep the GTP bearer intact during the idle transition
  - When the UE goes to the active state and establishes an RRC connection, no core signaling is needed to modify the bearer in SGW-D
- Form factor: MME, SGW-C, and PGW-C can all be part of one VM, with S11 and S5-C/S8-C replaced by internal interfaces
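The flow-timeout idea can be sketched as a flow-mod builder: instead of deleting the bearer flow on idle, install it with a long OF `idle_timeout` so SGW-D ages it out itself. The `gtp_teid` match field stands in for the GTP OF extension mentioned above, and all numbers are illustrative assumptions.

```python
# Sketch: keep the S1-U bearer flow across idle transitions by letting
# the switch expire it, rather than signaling a delete on every idle.
IDLE_UE_TIMEOUT_S = 3600   # keep bearer flows an hour past the last packet

def bearer_flow_mod(teid, out_port):
    """Build a dict-shaped flow-mod for an SGW-D bearer entry."""
    return {
        "match": {"gtp_teid": teid},        # needs the GTP OF extension
        "actions": [{"output": out_port}],
        "idle_timeout": IDLE_UE_TIMEOUT_S,  # survives idle-state transitions
        "hard_timeout": 0,                  # never force-expire
    }

print(bearer_flow_mod(0x1001, 2)["idle_timeout"])   # -> 3600
```

Tuning `IDLE_UE_TIMEOUT_S` is exactly the knob the text refers to for controlling when SGW-D finally drops a bearer.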
38
Action items: JSON config file to provide the topology to ONOS
- Future work: Ali implementing a link provider (discussion to support mobility)
- Segment routing ???
- Number of flow rules
39
CORD PoC_1 Decomposed LTE core gateways: Open Items
- Potential partner for the gateway control application
  - Need to work closely to define the OF extensions and features needed for the data plane
- Overall use cases of PoC_1
  - How does the packet-gateway function fit in the overall use case targeted for PoC_1?
  - Is the use case an end-to-end voice/video call over the CORD network?
  - Does the infrastructure exist to simulate end-to-end call flows (eNBs, mobiles, OTT services, etc.)?
  - Are deployment options of the gateway (access or centralized, based on traffic type) part of the PoC?
  - Is the goal to show throughput/latency aspects as well?
  - Will data monitoring and charging (via the PGW data plane) be part of the PoC?
- Platform
  - What type of x86 server is available to run the gateway data plane?
- Resources & logistics
  - Resources (and their location) from the Radisys perspective
  - ON.Lab resources to coordinate the gateway-based PoC
  - Does ON.Lab get involved in the development effort for the PoC?
  - Expectations on what becomes open source
40
SDN controlled aggregation and backhaul
As you mentioned, if all switches/routers within aggregation and backhaul are SDN controlled, we can define a proprietary forwarding mechanism (route based on ). I see two main challenges with this approach:
- Large flow tables: the flows we create in all switches will be per UE/IoT device. This assumes all intermediate OF switches can handle large flow counts (per UE/IoT). With most switches based on COTS switching silicon (like Broadcom), there are limits on the maximum number of flows that can be supported (probably thousands or hundreds of thousands).
- OF extensions: if we need certain specific OF extensions, all the intermediate switches need to support them.

Do we still need an anchor gateway (PGW)?
Within LTE, GTP tunneling plays two roles. The first is to enable device mobility by preserving the device's IP address; this is not needed for stationary IoT devices. The second role of the PGW is to act as an anchor for UE traffic so that operator services/policies can be applied to it. This includes services like DPI, NAT, firewall, lawful intercept, charging, parental control, etc. provided on the SGi interface (implemented via service chaining). I would assume that some of these services are important for IoT devices as well. When tunneling is applied, it is easy to route all UE/IoT traffic to the PGW anchor, where all these services can be applied. In a flat network (without any tunneling), is there a way to define a single anchor point for a specific device's ingress/egress traffic? This is related to the first point: if all intermediate switches can be OF controlled at the UE/IoT-device level, we can define a switch as the anchor for a device.

One option: a static, single tunnel from eNB to SGW and SGW to PGW
One way to address the above concerns is to keep the tunnel from the eNB to SGW-D (and SGW-D to PGW-D), but make it non-specific to the IoT device. There will be one tunnel from the eNB to the potential SGW-Ds (and from the SGW-Ds to the PGW-Ds); this tunnel is created statically.
We keep most of the procedure the same (IP address assignment by the PGW, the UE using this address for its session). With this approach:
- Only the eNB and the SGW-D/PGW-D solution need to maintain millions of UE/IoT-specific flows; all intermediate switches can be plain white-box switches.
- The controller needs to program UE/IoT-specific flows only in the eNB/SGW-D/PGW-D nodes, not in the intermediate switches. This reduces complexity in the intermediate switches; they continue to route/switch packets based on other header fields.
- The PGW still acts as the anchor to apply operator policies and services for IoT devices.
- This still achieves the initial goal of removing the signaling overhead of individual tunnel creation/deletion for IoT devices.
- No change to UE software.
- Not having a tunnel per IoT device certainly limits QoS provisioning on a per-device basis.
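A back-of-envelope calculation makes the scaling argument concrete. All counts below are invented for illustration; the point is the ratio, not the absolute numbers.

```python
# Per-device flows in every intermediate switch vs. static tunnels.
devices = 1_000_000          # IoT devices / UEs (assumed)
intermediate_switches = 20   # leaf/spine hops between eNB and PGW-D (assumed)
enb_sgw_pairs = 50           # static eNB<->SGW-D tunnels (assumed)

# Naive flat approach: every intermediate switch carries per-device state.
per_device_flows = devices * intermediate_switches

# Static-tunnel approach: intermediate switches only see tunnel flows;
# per-device state lives solely in the eNB and SGW-D/PGW-D.
static_tunnel_flows = enb_sgw_pairs * intermediate_switches

print(per_device_flows)      # -> 20000000
print(static_tunnel_flows)   # -> 1000
```

With COTS switching silicon topping out around hundreds of thousands of flows, the first number is infeasible while the second is trivial, which is the core of the argument above.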
41
Packet Flow through the MCORD Platform (update)
42
Backups - Preliminary: Operation Steps for Initial Attach
43
Backups EPS Session Establishment Procedure
44
Backups EPS Session Establishment Procedure (cont'd)
45
Backups EPS Session Establishment Procedure (cont'd)
46
Backups Concept of X2 Handover (cont'd)
47
Backups Connections and State Transition: Before / During / After X2 Handover
48
Backups Connections and State Transition: Before / During / After X2 Handover (cont'd)
49
Backups Handover Preparation
50
Backups Handover Execution
51
Backups Handover Completion
52
Backups Connections and States Before / After Service Request
53
Backups Procedure for UE Triggered Service Request
54
Backups Procedure for UE Triggered Service Request (cont'd)
55
Backups Procedure for Network Triggered Service Request
56
Backups Procedure for Network Triggered Service Request (cont'd)
57
Backups Information in Evolved Packet System Entity Before Service Request
58
Backups Information in Evolved Packet System Entity After Service Request
59
Questions / Assumptions:
Radisys' contribution to the PoC is around the segregated packet gateway in the LTE core network. The idea is to split the traditional gateway into a data plane (GTP-U, S5/S8) and a control plane (GTP-C S5/S8, Diameter). The control plane will sit on top of ONOS to program the data plane via the OF interface. Radisys has a data-plane solution based on an OVS architecture that provides OF/OVSDB interfaces. We are in discussion on how to handle the control-plane aspect.
We have the eSON parameter list from AirHop that provides the requirements for the ONOS NB APIs.
eNB-to-eSON APIs:
- configure_ue_periodic_measurement_report
- configure_ue_a4_measurement_report
- set_pci
- update_ocn
- update_q_offset
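To make the API list concrete, here is a hypothetical stub of that surface. Only the five function names come from the AirHop list; the class, signatures, and parameter names are assumptions for illustration.

```python
# Hypothetical stub of the eNB-to-eSON API surface (signatures assumed).

class ESONNorthbound:
    """Collects eSON requests that the ONOS NB APIs would relay to eNBs."""

    def __init__(self):
        self.requests = []

    def configure_ue_periodic_measurement_report(self, ue_id, period_ms):
        self.requests.append(("periodic_report", ue_id, period_ms))

    def configure_ue_a4_measurement_report(self, ue_id, rsrp_threshold):
        self.requests.append(("a4_report", ue_id, rsrp_threshold))

    def set_pci(self, cell_id, pci):
        self.requests.append(("set_pci", cell_id, pci))

    def update_ocn(self, cell_id, ocn_db):
        self.requests.append(("update_ocn", cell_id, ocn_db))

    def update_q_offset(self, cell_id, neighbor_id, offset_db):
        self.requests.append(("update_q_offset", cell_id, neighbor_id, offset_db))

api = ESONNorthbound()
api.set_pci("eNB-1", 42)
print(api.requests[0])   # -> ('set_pci', 'eNB-1', 42)
```

A real implementation would translate each collected request into the corresponding ONOS NB call toward the vBBUs.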