Analytics as a Service (for service assurance with closed loop actions)
Requirement/architecture owner: Srinivasa Addepalli. Source: Edge Automation WG. Thanks to Ramki, Raghu, Margaret, Chaker, Vijay, Jack, Tom, Donald and Frank for their mindshare, feedback and/or contributions. Disclaimer: this is a work in progress; some details will change as we develop a better understanding of the integration aspects with DCAE and PNDA.
Big Picture – Analytics
- Reactive: acts on escalation; productivity impact measured in hours.
- Pro-active: continuous monitoring and acting on alerts; little impact on end users, but alerts may be missed due to sheer volume.
- Predictive: identifies and then attempts to prevent SLA-impacting events; needs AI (ML/DL) analytics and a big data framework.
Prediction examples: Intrusion Detection, Traffic Classification, Anomaly Detection, Fault Prediction, Path Prediction, Resource Prediction, Traffic Prediction, and others.
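To make one of these concrete, here is a minimal sketch of the kind of logic an anomaly-detection inferencing app might start from: a z-score detector over a metric stream. The function name, threshold, and latency data are illustrative, not part of the stack described here.

```python
# Hypothetical illustration: flag samples far from the mean, as a
# trivial stand-in for an "Anomaly Detection" inferencing app.
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

# Example: steady latency readings with one spike.
latencies = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 55.0, 10.1, 9.7, 10.0]
print(detect_anomalies(latencies))  # the spike at index 6 is flagged
```

A real deployment would replace this with a trained ML/DL model served behind the inferencing stage, but the input/output shape (metric window in, anomaly indices out) stays the same.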
Network Analytics – Current method & Challenges
- One (or very few) analytics framework deployments; all data is fed to the central deployment.
- Only VNF and application analytics.
- Not Edge/5G friendly: scalability challenges; lack of fast closed-loop control.
- Big data analytics framework deployment is hard: ~2 weeks to deploy (think about multiple deployments), 3 to 4 weeks to configure and tune; continuous monitoring is a challenge.
- Not leveraging hardware accelerators such as ML/DL algorithm accelerators and math operations.
- Not leveraging infrastructure metrics (which help the accuracy of prediction).
Big Data Analytics Stages
Collection Agents → Distribution → Data Storage (Data Lake) → Training (training apps) → Store (Models DB) → Inferencing (inferencing apps) → Correlation (correlation apps) → Action (action controllers, i.e., external controllers such as Kubernetes*, APPC, SDNC). *Other names and brands may be claimed as the property of others.
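The staged pipeline above can be sketched end to end in a few lines. Everything here is illustrative (the stage functions, the CPU metric, the threshold "model"); it only shows how data flows collect → distribute → store → train → infer → act, not the actual DAaaS APIs.

```python
# Hypothetical sketch of the analytics stages wired together.

def collect():                        # collection agents (e.g., collectd)
    return [{"cpu": c} for c in (0.2, 0.3, 0.9, 0.4)]

def distribute(records):              # distribution bus (e.g., Kafka/NATS)
    yield from records

def store(data_lake, records):        # data lake (e.g., HDFS/Minio)
    data_lake.extend(records)

def train(data_lake):                 # training app: learn a CPU threshold
    return sum(r["cpu"] for r in data_lake) / len(data_lake) + 0.3

def infer(model, record):             # inferencing app
    return record["cpu"] > model

def act(alerts):                      # action controller (e.g., scale out)
    return [f"scale-out triggered by {r}" for r in alerts]

lake = []
store(lake, distribute(collect()))
model = train(lake)
alerts = [r for r in lake if infer(model, r)]
print(act(alerts))
```

In the real stack each arrow crosses a service boundary (message bus, storage API, model store), but the contract between stages is the same shape.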
Data Analytics Framework
Needs / Requirements
Stack Goals – Decentralization
Central site: collect, distribute, store, train, predict, correlate, action (central analytics). Edge clouds (small, medium and large edges), private cloud, public cloud: every edge collects and distributes; larger sites also store, train, predict, and act locally.
Challenges with a purely central model:
- Petabytes of data need to be transferred
- Not edge friendly
- Data sovereignty challenges
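One way to see why decentralization avoids the petabyte-transfer problem: each edge can reduce its raw metrics to a small mergeable summary, and only the summaries cross the WAN. The sketch below is illustrative (names and the count/sum/max summary are assumptions, not the stack's actual protocol).

```python
# Hypothetical sketch: edges ship mergeable summaries, not raw data.

def summarize(samples):
    # Runs at the edge; raw samples never leave the site.
    return {"count": len(samples), "sum": sum(samples), "max": max(samples)}

def merge(summaries):
    # Runs at the central site; works on summaries only.
    return {
        "count": sum(s["count"] for s in summaries),
        "sum": sum(s["sum"] for s in summaries),
        "max": max(s["max"] for s in summaries),
    }

edge_a = summarize([10, 12, 11])          # stays local to edge A
edge_b = summarize([9, 50, 10, 11])       # stays local to edge B
central = merge([edge_a, edge_b])
print(central["max"], central["sum"] / central["count"])
```

The same idea, generalized from count/sum/max to model gradients, is what the federated-learning slide later in this deck builds on.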
Stack Goals – Bulk Deployment & Configuration
Automation & configuration:
- Edges could number in the thousands; each edge would have its own site orchestrator (e.g., Kubernetes*)
- Need for centralized automation: deploy the various parts of the analytics stack in multiple locations and configure them to work together
- Support for dynamic edges
Goal: deploy and configure the big data analytics framework in hours instead of months.
(Diagram: a regional site plus small, medium and large edges, each running its own subset of collect, distribute, store, train, predict, and action components.)
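The bulk-deployment goal amounts to a central loop that renders one deployment spec per edge from a shared template, sized to what each edge runs. The sketch below is a hypothetical illustration (the template fields, component names, and sizing rule are assumptions); in practice this role is played by Helm charts driven through ONAP4K8S.

```python
# Hypothetical sketch of centralized bulk configuration for many edges.

TEMPLATE = {"chart": "daaas", "replicas": 1,
            "components": ["collect", "distribute"]}

def render(edge_name, size):
    """Build a per-site deployment spec from the shared template."""
    spec = dict(TEMPLATE, site=edge_name)
    if size == "large":
        # Assumed rule: large edges also train, predict, and act locally.
        spec["components"] = spec["components"] + ["train", "predict", "action"]
        spec["replicas"] = 3
    return spec

edges = [("edge-001", "small"), ("edge-002", "large")]
specs = {name: render(name, size) for name, size in edges}
print(specs["edge-002"]["components"])
```

Scaling the `edges` list to thousands of entries is what "one click deployment" means here: the per-site variation lives in data, not in hand-edited configs.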
Stack Goals – Use Known Big Data Frameworks
The goal is to leverage well-known open source packages:
- Choose the right versions
- AI (ML/DL) training and inferencing
- Make them work together
- Identify gaps and fix them
Stack Goals – Follow Cloud Native and Micro-Services Design Patterns
Security: Istio* CA and Envoy* proxy for mutual TLS among pods; Istio ingress for communication outside of clusters; Istio RBAC; security kept away from applications.
Load balancing & CI/CD: Istio Envoy with Cilium acceleration; visualization and monitoring; A/B testing and canary deployments.
Operators: to bring up pods and to configure them using CRDs.
Functions: to enable developer functions to execute natively in the pipeline.
Stack Goals – Reduce training time
- Distributed learning with data parallelism
- Leverage hardware accelerators for performance
- Leverage NI optimized software libraries
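Data parallelism reduces training time by splitting each batch across workers: every worker computes a gradient on its shard, the gradients are averaged (an all-reduce), and the shared weights are updated once. The sketch below shows the pattern on a toy least-squares model y = w·x; the function names and learning rate are illustrative, and frameworks like Horovod implement the same averaging step across real GPUs/nodes.

```python
# Hypothetical sketch of data-parallel training with gradient averaging.

def grad(w, shard):
    # d/dw of mean((w*x - y)^2) over this worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def step(w, shards, lr=0.01):
    grads = [grad(w, s) for s in shards]       # computed in parallel
    return w - lr * sum(grads) / len(grads)    # all-reduce: average, update

data = [(x, 2.0 * x) for x in range(1, 9)]     # ground truth: w = 2
shards = [data[:4], data[4:]]                   # two workers, half each
w = 0.0
for _ in range(100):
    w = step(w, shards)
print(round(w, 3))
```

The key property is that one synchronized step over N shards is mathematically close to one step over the full batch, so wall-clock time per epoch drops roughly with the worker count (minus communication cost).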
Stack Goals – Federated/Distributed Learning
Distributed learning orchestration to honor data sovereignty: data does not leave the site. Aggregation-server-based model: a TensorFlow*-based DL aggregator combines updates from the data owners, each of whom trains locally on their own data.
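The aggregation-server pattern can be sketched with federated averaging: each data owner fits a model on data that never leaves its site and sends only the resulting weight; the aggregator returns the sample-size-weighted average. This is an illustrative toy (closed-form least squares for y = w·x stands in for TensorFlow training), not the deck's actual aggregator code.

```python
# Hypothetical sketch of federated averaging with an aggregation server.

def local_train(data):
    # Closed-form least squares for w in y = w * x, run at the owner's site.
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def aggregate(updates):
    # updates: list of (weight, num_samples); only these cross sites.
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

owner1 = [(1.0, 3.0), (2.0, 6.0)]                 # stays with owner 1
owner2 = [(1.0, 3.0), (3.0, 9.0), (4.0, 12.0)]    # stays with owner 2
global_w = aggregate([(local_train(owner1), len(owner1)),
                      (local_train(owner2), len(owner2))])
print(global_w)
```

Sovereignty is preserved because the aggregator only ever sees model parameters and sample counts, never the owners' raw records.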
Data Analytics Framework
An opinionated stack
Deployment and Automation using ONAP4K8S
Stack – packages & enhancements to realize the analytics stack (Release 6 goals).
An opinionated stack: Helm charts, deployed using K8S in each site.
R6 time-frame goals:
- Container images built from source
- A few customized Day-0 profiles; Day-2 templates for dynamic configuration
- Security via Istio*
- Deploy with ONAP-K8S; NATS; sample applications
- Code: demo/tree/master/vnfs/DAaaS
Components by stage:
- Collection agents: CollectD (with CollectD operator), Node Exporter, cAdvisor, Prometheus* operator
- Distribution / messaging: NATS; Kafka* broker with Kafka operator; Zookeeper*
- Data storage (data lake): HDFS (HDFS writer, HDFS operator); M3DB with M3DB operator; Minio* with Minio operator
- Training: Spark* with Spark operator; scikit-learn; Horovod; TF with TF operator
- Model store (Models DB): model dispatcher
- Inferencing: TF and TF Serving operator; Knative ML serving & operator
- Correlation / action: action controllers
- Visualization: Grafana*, Hue*
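The Day-2 templates mentioned above are about re-rendering configuration per site at runtime rather than baking it in at deploy time. Here is a minimal, hypothetical illustration using Python's `string.Template`; the field names (scrape interval, broker address) are assumptions standing in for whatever templating the stack actually uses.

```python
# Hypothetical sketch of Day-2 dynamic configuration rendering.
from string import Template

DAY2_TEMPLATE = Template(
    "prometheus_scrape_interval: ${interval}\n"
    "kafka_brokers: ${brokers}\n"
    "site: ${site}\n"
)

def render_day2(site, interval="30s", brokers="kafka-0:9092"):
    """Render a per-site config from the shared Day-2 template."""
    return DAY2_TEMPLATE.substitute(site=site, interval=interval,
                                    brokers=brokers)

print(render_day2("edge-007", interval="15s"))
```

The point of separating Day-0 profiles (fixed at deployment) from Day-2 templates (rendered later) is that sites can be retuned centrally without redeploying the stack.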
Roadmap
Roadmap / Wishlist
- Uniform OS across all containers (ClearLinux or Ubuntu)
- Build from source code (leverage instruction-set-optimized libraries, enable security patching, etc.): get control of container software packages
- Automation: one-click deployment of the stack in multiple locations
- Pipelining and DAC support for analytics applications
- Monitoring of the stack as well as of analytics applications
- Native support for action functions
- Federated learning to address data sovereignty challenges
- Generic action functions for K8S* and ONAP*
- Model LCM and model security
- Leverage hardware accelerators and OpenVINO™
- Integration with ONAP DCAE
- Inferencing using non-TF (non-DL) frameworks
Call to action: feedback, feedback, feedback!
It is open source and contributions are welcome. The source code repo is in LFN/ONAP*, but the stack is modular and can be deployed without ONAP. Try it and let us know:
- New collection mechanisms?
- Lightweight messaging?
- Model repositories?
- Any data lakes?
- Want to try your data analytics applications? Please contact us.