Network Performance Insight 1.2.2 PoC Performance Readout

Agenda:
- Ambari server performance readout
- NPI server performance readout
  - With no concurrent users
  - With 3 concurrent users
The PoC uses the following hardware setup for the Ambari and NPI servers.

Host server A: Ambari server (hostname = in-ibmibm666)
Recommended hardware specification:
- 4 cores
- 8 GB RAM
- 100 GB disk space

Host server B: NPI Agent Node 1 PoC (hostname = in-ibmibm665)
- 32 GB RAM
- 4 TB disk space

Figure 1: PoC Deployment
Ambari server performance readout

The following shows the CPU and memory utilization of the Ambari server with the Ambari Metrics service turned OFF.
Ambari server (in-ibmibm666), December 17: Ambari server CPU utilization averaged 1.89%.
Ambari server (in-ibmibm666), December 17: memory utilization on the Ambari server did not exceed 8 GB RAM (Ambari Metrics service OFF).
NPI server performance readout – with no concurrent users

The following shows the CPU and memory utilization of the NPI PoC server.
The NPI hardware setup on slide 2 supports the following traffic load limit:
- 4,000 fps NetFlow v9, with ALL aggregations ON
- 8 million ART/QoS (Application Response Time / Quality of Service) records per hour from flow
- 5 million SNMP records per hour from ITNM, across ~6,500 interfaces
- 450 IPSLA probes producing ~430,000 records per hour
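As a back-of-the-envelope check, the hourly load limits above can be restated as per-second and per-probe rates. The sketch below derives these purely from the numbers in this readout; it is illustrative arithmetic, not an additional measurement:

```python
# Restate the stated hourly load limits as per-second / per-probe rates.
SECONDS_PER_HOUR = 3600

art_qos_per_hour = 8_000_000   # ART/QoS records per hour from flow
snmp_per_hour = 5_000_000      # SNMP records per hour from ITNM
ipsla_per_hour = 430_000       # records per hour from 450 IPSLA probes
ipsla_probes = 450

print(f"ART/QoS: ~{art_qos_per_hour / SECONDS_PER_HOUR:.0f} records/s")
print(f"SNMP:    ~{snmp_per_hour / SECONDS_PER_HOUR:.0f} records/s")
print(f"IPSLA:   ~{ipsla_per_hour / ipsla_probes:.0f} records per probe per hour")
```

This works out to roughly 2,200 ART/QoS records and 1,400 SNMP records per second, and just under 1,000 records per IPSLA probe per hour.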
NPI 1.2.2 (in-ibmibm665): CPU utilization on December 19 averaged around 60% under the specified load.
NPI 1.2.2 (in-ibmibm665) - Memory utilization by NPI services: NPI Flow Collector, NPI Flow Analytics
NPI 1.2.2 (in-ibmibm665) - Memory utilization by NPI services: NPI ITNM Collector, NPI Entity Analytics
NPI 1.2.2 (in-ibmibm665) - Memory utilization by NPI services: NPI SNMP Collector (Probe), NPI Storage
NPI 1.2.2 (in-ibmibm665) - Memory utilization by NPI services: NPI UI, NPI Threshold
NPI server performance readout – with 3 concurrent users

The following performance readout shows the changes in CPU and memory utilization caused by users concurrently accessing the NPI Dashboards.
Performance readout with 3 concurrent users accessing the dashboards: concurrent users draw additional memory and CPU from the NPI services. The CPU footprint of the NPI Storage and UI services grows, along with the memory footprint of Spark-YARN. Estimated CPU utilization increased from 60% to 65%.
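One way to read the 60% to 65% change is as a rough per-user CPU cost. The sketch below extrapolates linearly, which is an assumption for illustration only (dashboard load rarely scales perfectly linearly), and the 90% ceiling is a hypothetical planning threshold, not a measured limit:

```python
# Rough per-user CPU cost from the readout above.
# ASSUMPTION: CPU cost scales linearly with concurrent users.
baseline_cpu = 60.0   # % CPU with no concurrent users
loaded_cpu = 65.0     # % CPU with 3 concurrent users
users = 3

per_user_cpu = (loaded_cpu - baseline_cpu) / users
print(f"~{per_user_cpu:.2f}% CPU per concurrent user")

# Headroom to a hypothetical 90% ceiling under the same linear assumption:
ceiling = 90.0
extra_users = int((ceiling - loaded_cpu) * users / (loaded_cpu - baseline_cpu))
print(f"~{extra_users} additional users before reaching {ceiling:.0f}% CPU")
```

Under this (optimistic) linear model, each user costs roughly 1.7% CPU, suggesting headroom for on the order of 15 more concurrent users before the hypothetical 90% ceiling.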
NPI 1.2.2 (in-ibmibm665) - VisualVM readings by NPI service, with 3 concurrent users:
- NPI Flow Collector: memory increased by 50 MB
- NPI Flow Analytics: no significant change
- NPI ITNM Collector: no significant change
- NPI Entity Analytics: memory increased by 120 MB
- NPI SNMP Collector (Probe): no significant change
- NPI Storage: memory increased by 500 MB
- NPI UI: no significant change
- NPI Threshold: not impacted by concurrent users
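Summing the per-service VisualVM deltas above gives a rough total for the extra memory attributable to the 3 concurrent users. Services reported as "no significant change" are counted as 0 MB, and the even per-user split is purely illustrative:

```python
# Per-service memory deltas (MB) from the VisualVM readings above;
# "no significant change" is counted as 0.
memory_delta_mb = {
    "NPI Flow Collector": 50,
    "NPI Flow Analytics": 0,
    "NPI ITNM Collector": 0,
    "NPI Entity Analytics": 120,
    "NPI SNMP Collector": 0,
    "NPI Storage": 500,
    "NPI UI": 0,
    "NPI Threshold": 0,
}

total_mb = sum(memory_delta_mb.values())
per_user_mb = total_mb / 3  # naive even split across the 3 users

print(f"Total extra memory: {total_mb} MB (~{per_user_mb:.0f} MB per user)")
```

The observed total is about 670 MB of additional service memory, or roughly 220 MB per concurrent user under the naive even split.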