1 Title: Robust ONAP Platform Controller for LCM in a Distributed Edge Environment (In Progress)
Source: ONAP Architecture Task Force on Edge Automation
Lead: VMware
Core Team: Amdocs, AT&T, Intel, VMware
Others: Fujitsu, Huawei, Nokia, Red Hat, Vodafone, Verizon
Date: March
Link:

2 ONAP Orchestrator Features (High-Level View)
Dublin:
- All ONAP projects are deployed/bootstrapped through Helm under OOM (see the override sketch below)
- DCAE service components are deployed using Cloudify blueprints
- ONAP deployment is single-site
- HA is not enforced, as there is a single orchestration function
- Infrastructure (K8S) is assumed to be installed on the compute/VM nodes
- Dynamic instantiation of applications is limited to DCAE

Future Needs:
- Multi-site, multi-tenant infrastructure management, deployment, and life-cycle management of components near or co-located at cloud regions (e.g. customer edges, network edges, RICs, core network centers), requiring scale-out across locations
- Flexible, policy-driven configuration and deployment management
- A Manager-of-Managers model to distribute and scale the necessary central functions
- Support for containerized and VM-based workload deployment
- Consistent onboarding and modelling for all management components
- A single dashboard/system to onboard cloud regions
- A real-time dashboard and inventory of all management apps and components that are deployed and active
- Geo-redundant deployment and failure recovery
- Support for third-party management components
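For context on the Dublin baseline, OOM drives the whole-platform deployment from a Helm override file in which each ONAP project is a toggleable subchart. The sketch below follows OOM's values.yaml convention; the exact component keys vary by release, so treat it as illustrative rather than a verified Dublin file.

```yaml
# overrides.yaml (illustrative): OOM-style Helm overrides in which each ONAP
# project is a subchart enabled or disabled through an "enabled" flag.
global:
  repository: nexus3.onap.org:10001   # ONAP's public image registry
aai:
  enabled: true
sdc:
  enabled: true
so:
  enabled: true
dcaegen2:
  enabled: true    # DCAE then deploys its own service components via Cloudify blueprints
robot:
  enabled: false   # test framework left out of this footprint
```

A deployment of this shape is applied in one shot, e.g. with OOM's helm deploy plugin (`helm deploy dev local/onap -f overrides.yaml --namespace onap`; plugin syntax per the OOM docs), which is what gives the Dublin model its single-site, all-at-once character.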

3 ONAP Management Application Deployment Requirements (From EA-WG wiki)

4 ONAP Management Application Deployment Requirements – Continued
Miscellaneous: Infrastructure LCM (bring up the necessary K8S clusters only if needed)

5 ONAP Orchestration – Current Functional Gaps (Dublin)
Design:
- No standardized configuration modelling enforced in Helm templates
- On-demand service-design creation and deployment of single/composed management applications

Instantiation:
- Control-loop flow deployment and support through CLAMP/Policy/SDC
- xNF event-based application management (deployment and configuration)

Run-time:
- Dynamic configuration management through central ONAP Policy
- No backward support for TOSCA-based ONAP applications and workflows
- Dynamic DMaaP topic provisioning/configuration for management applications
- Consolidated view of deployed microservices and their relationships
- Dependency integrity not maintained during un-deployment

Platform/Infrastructure:
- Infrastructure management
- Hierarchical deployment
- Support for heterogeneous cloud regions (e.g. K8S and OpenStack)
- Manual maintenance of chart/values.yaml files is not a scalable approach for operations (see the sketch below)
- Geo-redundancy management support for ONAP components
- Standardized AAF integration support
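To make the chart/values.yaml maintenance gap concrete: with plain Helm, every cloud region ends up with its own hand-edited override file of roughly this shape. The file name and subchart keys below are hypothetical; the point is the multiplication across hundreds of sites and dozens of charts, not the specific fields.

```yaml
# values-edge-site-042.yaml (hypothetical): one of N per-site override files
# that operators must keep consistent by hand - the non-scalable part.
global:
  nodePortPrefix: 304          # must not collide with other sites
dcae-collector:                # hypothetical subchart name
  enabled: true
  replicaCount: 2              # sized per site traffic
  config:
    dmaapTopic: unauthenticated.VES_MEASUREMENT_OUTPUT  # per-site topic wiring
    siteId: edge-site-042
```

Every field that differs per site is a field someone must edit, review, and keep in sync across releases; that is the burden the policy-driven configuration management called for on slide 2 is meant to remove.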

6 Distributed Management (ONAP/3rd Party) - Key Use Cases
- Analytics/DCAE services at the ONAP edge
- Closed-loop functions at the ONAP edge
- Infrastructure LCM across the distributed platform (Note 1)
- LCM of 3rd-party management components across multiple sites
- Geo-redundancy management support for ONAP components
Note 1: Infrastructure LCM means bringing up the necessary mgmt. K8S clusters only if needed

7 Analytics (3rd Party) Use Case – Exemplary Deployment Scenario
Management orchestration requirements (summarizing the slide's site diagram of training, inferencing, and collection stacks):
- One training app consists of three services (Training App1, App2, App3) expected to run in a sequence: a DAG-based flow requirement (see the workflow sketch below)
- One app requires multiple components in various regions, e.g. a visualization service and a model repo alongside the training stacks; training stacks are very few
- When a new workload is brought up, the existing collection and inferencing apps in various regions need to be reconfigured as a bundle (Day-2 config), and a new inferencing app may also be required; inferencing stacks number in the hundreds
- Compute-intensive services must be placed on nodes that have HW accelerators (GPU, FPGA, etc.)
- Custom collection services run at the largest scale: tens of thousands of collection stacks
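The deck does not name a workflow engine for the DAG-based flow, but the requirement maps directly onto a Kubernetes-native workflow definition. Purely as an illustration (Argo Workflows is one concrete option; the image names are placeholders), the three training services run in sequence like this:

```yaml
# Hypothetical sketch: the Training App1 -> App2 -> App3 sequence expressed
# as an Argo Workflows DAG. Engine choice and image names are illustrative;
# the deck only states the DAG-based flow requirement.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: training-pipeline-
spec:
  entrypoint: training-dag
  templates:
  - name: training-dag
    dag:
      tasks:
      - name: training-app1
        template: run-service
        arguments:
          parameters:
          - name: image
            value: example.org/training-app1
      - name: training-app2
        dependencies: [training-app1]   # starts only after App1 succeeds
        template: run-service
        arguments:
          parameters:
          - name: image
            value: example.org/training-app2
      - name: training-app3
        dependencies: [training-app2]
        template: run-service
        arguments:
          parameters:
          - name: image
            value: example.org/training-app3
  - name: run-service
    inputs:
      parameters:
      - name: image
    container:
      image: "{{inputs.parameters.image}}"
```

A linear chain is the simplest DAG; the same `dependencies` field also expresses fan-out/fan-in where services can partially run in parallel.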

8 Options Considered for OOM+
Management Application as a traditional VNF package (Option #1):
- Runs as part of ONAP Central (on the same K8S cluster)
- Shares SDC for management-app onboarding
- Shares SO for instantiation of management apps as network services
- Shares OOF for placement decisions based on various criteria, including cost, HPA, etc.
- Shares MC to bring up management apps in various cloud regions

Extending DCAE Orchestration for TOSCA and Helm (Option #2):
- Supports central, regional, and edge sites equally as far as management apps are concerned
- Supports bringing up controller components on their own K8S cluster/namespace
- Uses Cloudify as the service orchestrator, with TOSCA workflow support
- Extends dynamic configuration support to Helm-deployed components (already supported for TOSCA-based components)
- Extends design-flow and configuration-modelling support to Helm-based components
- Supports edge/site provisioning
- Extends the dashboard and inventory to cover management apps deployed via both Helm and TOSCA
- Supports cloud-native management applications (leveraging Operators, Istio, and CRDs for Day-0 and Day-2 config)

Extending the cloud-native ecosystem with ONAP-specific functions (Option #3):
- Supports central, regional, and edge sites equally as far as management apps are concerned
- Uses K8S plus some new active components to act as the service orchestrator
- Supports bringing up controller components on their own K8S cluster/namespace
- Builds new active components to support dynamic deployment across multiple K8S clusters and dynamic configuration management (Day 2); see the CRD sketch below
- Aligns with existing ONAP components for onboarding/configuration modelling (design flow)
- Supports cloud-native management applications (leveraging Operators, Istio, and CRDs for Day-0 and Day-2 config)
- Builds a new entity to support TOSCA-based applications (backward compatibility)
- Builds a new UI for viewing components running across different clusters

Option #1 is ruled out for the following reasons:
- It cannot support the existing Cloudify/TOSCA-based management applications
- Separation-of-concerns issues arise from sharing some components between VNFs and management apps
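Option #3's "new active components" would most naturally surface as Operators watching custom resources. The CRD below is a hypothetical sketch (group, kind, and field names are invented, not an ONAP API) of what a multi-cluster management-app resource could look like:

```yaml
# Hypothetical CRD sketch for Option #3: a custom resource an operator could
# reconcile to deploy a management app across several cloud regions. All
# names (group, kind, fields) are illustrative, not from ONAP.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: managementapps.oomplus.example.org
spec:
  group: oomplus.example.org
  scope: Namespaced
  names:
    kind: ManagementApp
    plural: managementapps
    singular: managementapp
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              chart:            # Helm chart carrying the Day-0 artifacts
                type: string
              cloudRegions:     # target clusters for scale-out placement
                type: array
                items:
                  type: string
              day2Config:       # opaque Day-2 configuration bundle
                type: object
                x-kubernetes-preserve-unknown-fields: true
```

Creating or patching a `ManagementApp` then becomes the dynamic-deployment API, and the operator's reconcile loop is the "active component" that pushes the chart and Day-2 config to each listed region.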

9 Combined OOM+ solution - Merging of Option 2 & Option 3 – In Progress
Key Aspects:
- Leverage the cloud-native ecosystem: Operators, CRDs, Istio service mesh, etc.
- Support TOSCA backward compatibility for existing management applications (see the blueprint sketch below)
- Support multiple deployment models: standalone (Helm and/or TOSCA) and SDC (Helm and/or TOSCA)
- Leverage specific Cloudify components; discussions are in progress with the Cloudify team
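On the TOSCA backward-compatibility point, the existing management applications in question ship as Cloudify DSL blueprints of roughly this shape (a minimal skeleton; the node name is illustrative and the imports URL should match your Cloudify release):

```yaml
# Minimal Cloudify/TOSCA blueprint skeleton of the kind the combined OOM+
# solution must keep deploying. The node template name is illustrative.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.3/types.yaml

inputs:
  replicas:
    type: integer
    default: 1

node_templates:
  analytics_collector:
    type: cloudify.nodes.Root   # real blueprints use plugin-specific types
```

Backward compatibility here means the merged solution must resolve these imports, honor the TOSCA lifecycle interfaces, and run the same install/uninstall workflows that Cloudify executes today.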

