Slide 1: Task 36b Scope – CPU & Memory (L=ChrisH)
Purpose – add CPU and Memory functionality support to the model.
Specifically –
Includes – DSP, FPGA, GPU? L1 & L2 cache, page files?
Excludes – CPU and Memory physical inventory, which is covered by the existing Equipment model. Excludes Storage.
External Dependencies –
Assumptions –
Risks –
Slide 2: Team Members
Leader – Chris Hartley
Members – ???
Slide 3: IPR Declaration
Is there any IPR associated with this presentation? No.
NOTICE: This contribution has been prepared to assist the ONF. This document is offered to the ONF as a basis for discussion and is not a binding proposal on Cisco or any other company. The requirements are subject to change in form and numerical value after more study. Cisco specifically reserves the right to add to, amend, or withdraw statements contained herein. THE INFORMATION HEREIN IS PROVIDED “AS IS,” WITHOUT ANY WARRANTIES OR REPRESENTATIONS, EXPRESS, IMPLIED OR STATUTORY, INCLUDING WITHOUT LIMITATION, WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Slide 4: Need for a CPU and Memory Model
This is part of the data center triad: Compute, Network and Storage. The Network model is done and the Storage model has been started; this task addresses Compute.
[Figure: triad diagram comparing Storage, Network and Compute against information transfer vs. information modification (Y/N).]
Slide 5: Why do we care about all this?
The key questions that need to be supported are:
Is my application working?
Can users connect to my application?
Managing each separately is easier, but doesn't help if the issue is a mismatch between the compute and network worlds.
Applications now are quite different than they were in 1981.
Applications now have quite different network requirements than they had in 1981.
(1981 is an arbitrary date – the year the IBM PC was released.)
Slide 6: We need to support various arrangements of data and instructions
Here, Processing Unit = CPU core (or FPGA, DSP, GPU … equivalent).
[Figure: matrix of single vs. multiple instruction streams against single vs. multiple data streams (Flynn's taxonomy: SISD, SIMD, MISD, MIMD).]
Slide 7: We need to support various arrangements of CPU and memory
Slide 8: Types of Memory have different speed and capacity tradeoffs
[Figure: memory hierarchy (not to scale) – CPU register, L1 cache, L2 cache, main memory (volatile), storage (non-volatile). Capacity increases down the hierarchy while speed decreases and latency grows; cost per bit is also a factor.]
Slide 9: DMTF CIM 2.48
Slide 10: DMTF CIM 2.48
Slide 11: SNIA Swordfish / DMTF Redfish
The key Redfish concepts appear to be:
Processor – a processor attached to a System
Memory – a Memory and its configuration
MemoryChunks – a Memory Chunk and its configuration
MemoryDomain – used to indicate to the client which Memory (DIMMs) can be grouped together in Memory Chunks to form interleave sets or otherwise grouped together
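As a rough illustration of the shape of these resources (property names recalled from the DMTF Redfish schemas rather than checked against the published standard, so treat them as approximate):

```python
# Rough sketch of Redfish Processor and Memory resources as Python dicts.
# Property names are recalled from the DMTF Redfish schemas and may be
# approximate; check the published schemas before relying on them.

example_processor = {
    "@odata.id": "/redfish/v1/Systems/1/Processors/CPU0",   # illustrative path
    "ProcessorType": "CPU",      # the schema also covers GPU, FPGA, DSP, ...
    "TotalCores": 8,
    "TotalThreads": 16,
    "MaxSpeedMHz": 1800,
}

example_memory = {
    "@odata.id": "/redfish/v1/Systems/1/Memory/DIMM0",      # illustrative path
    "MemoryDeviceType": "DDR4",
    "CapacityMiB": 16384,
    "OperatingSpeedMhz": 2400,
}
```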
Slide 12: OpenConfig
Slide 13: Proposed Model – Package Dependencies
[Figure: package dependency diagram – Compute, Equipment, Core.]
Slide 14: Proposed CPU Model
If a CPU is symmetric, then use one physical entry per CPU.
If a CPU is asymmetric, then use one physical entry per group of similar CPU cores. For example, a CPU may have 4 × 1.8 GHz Type-A cores and 4 Type-B cores at a different clock rate, and hence 2 entries, each of 4 cores, would be used (see the sketch below).
CPU chip(s) may be FRU or non-FRU inventory – here we are interested in the compute function.
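A minimal sketch of what "one entry per group of similar cores" could look like in practice; the class and attribute names below are hypothetical illustrations, not names from the proposed model.

```python
# Hypothetical sketch of "one physical entry per group of similar CPU cores".
# Class and attribute names are illustrative only, not taken from the proposed model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CpuCoreGroup:
    core_type: str              # e.g. "Type-A", "Type-B"
    core_count: int             # number of identical cores in this group
    clock_mhz: Optional[int]    # clock rate, if known

# The asymmetric CPU from the slide becomes two entries of 4 cores each;
# a symmetric 8-core CPU would instead be one entry of 8 cores.
asymmetric_cpu = [
    CpuCoreGroup(core_type="Type-A", core_count=4, clock_mhz=1800),
    CpuCoreGroup(core_type="Type-B", core_count=4, clock_mhz=None),  # Type-B clock not given on the slide
]
```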
Slide 15: Proposed Memory Model
Memory chip(s) / SIMM / DIMM modules may be FRU or non-FRU inventory – here we are interested in the memory function.
Slide 16: Compute Pool (copied from Storage pack)
Note that the decision was made to have a single compute pool rather than separate Storage, CPU and Memory pools, because CPU and memory are usually tightly coupled and the pool can then allocate these consistently. Sometimes storage is also tightly coupled with CPU and memory, and the pool can then allocate all of these consistently.
Storage, CPU and Memory may not be the best grouping of the pool entries – perhaps physical vs. logical, or input vs. output – so starting with a flat set of all the pool entries will allow the best grouping to evolve, rather than imposing an inheritance structure at this stage.
Note that the pools aren't hierarchical (deliberately no ComputePool contained-in self-join).
The associations XxxPoolEntryIsLogical allow an output from one pool to become the input of another pool. We really want this to form a directed acyclic graph (no loops) – see the sketch below.
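Because the pool-entry links are meant to form a directed acyclic graph, a validator could reject any link that would introduce a loop. A minimal sketch, assuming a hypothetical edge-list representation (none of the names below are model terms):

```python
# Minimal sketch of the "no loops" rule for pool-entry links.
# Entry names and the edge representation are hypothetical, not model terms.
from collections import defaultdict

def has_cycle(edges):
    """Return True if the directed graph given as (src, dst) pairs contains a cycle."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)   # every node starts WHITE

    def visit(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY:                      # back edge -> cycle
                return True
            if colour[nxt] == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    nodes = set(graph) | {d for dsts in graph.values() for d in dsts}
    return any(colour[n] == WHITE and visit(n) for n in nodes)

# Output of pool A feeds pool B, and B feeds C: fine.  Adding C -> A would close a loop.
links = [("poolA.out", "poolB.in"), ("poolB.out", "poolC.in")]
assert not has_cycle(links)
assert has_cycle(links + [("poolC.out", "poolA.out")] + [("poolB.in", "poolB.out"), ("poolC.in", "poolC.out"), ("poolA.out", "poolA.out")]) or True  # illustrative only
```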
Slide 17: Some Questions
What units should we use for memory sizes – bytes, KiB, MiB …?
What units should we use for CPU – Hz, kHz, MHz …?
Note that Kubernetes works in units of CPU, where "One CPU, in Kubernetes, is equivalent to a Hyperthread on a bare-metal Intel processor with Hyperthreading". A CPU hardware thread is also called a vCPU.
Note that we should try to avoid using floating-point numbers, as the rounding proves problematic (see the sketch below).
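A small illustration of why integer units (bytes, milliCPU) behave better than floats for this bookkeeping; the variable names are illustrative only:

```python
# Why integer units (bytes, milliCPU) are preferable to floats for capacity accounting.
# Variable names are illustrative only.

# Summing fractional CPUs as floats accumulates rounding error:
requests_cpu = [0.1] * 10
print(sum(requests_cpu))             # 0.9999999999999999, not 1.0
print(sum(requests_cpu) == 1.0)      # False

# The same bookkeeping in integer milliCPU is exact:
requests_millicpu = [100] * 10
print(sum(requests_millicpu))            # 1000
print(sum(requests_millicpu) == 1000)    # True

# Memory is similar: 16 GiB expressed as an exact integer number of bytes.
capacity_bytes = 16 * 1024**3            # 17179869184
```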
Slide 18: Compute Pool Group
This will allow us to group, for instance, CPU and Memory that need to be kept together. Need to validate this.
Slide 19: The Compute Model relates to the software using it
Slide 20: Compute Examples – Simple Host (1)
1 CPU with 1 core, 1 memory DIMM, 1 local HDD.
[Figure: instance diagram with labels CD = Phy(Chassis), CD = Phy(Blade1), CD = Phy(Blade2); PC, RunningOS, Running Software Process (OS invokes Process, PC from RunningSoftware Process); CPU Cores, Memory Blocks.]
Slide 21: Kubernetes
"A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources" (e.g. main + sidecar). "Each Pod is meant to run a single instance of a given application."
A Kubernetes Pod can request CPU and Memory and can also have CPU and Memory limits set.
Memory is specified in bytes.
"The CPU resource is measured in CPU units. One CPU, in Kubernetes, is equivalent to: 1 AWS vCPU, 1 GCP Core, 1 Azure vCore, 1 Hyperthread on a bare-metal Intel processor with Hyperthreading. Fractional CPU values are allowed." Precision finer than 1 milliCPU is not allowed.
Note that CPU usage is really policed as usage over time (cycles – (milli)seconds of use in a period). Assume we have a limit of 0.1 CPU and we check every 100 ms, and that our process is using 50% of a CPU. After 100 ms we see it has used 50 ms of CPU. It is allowed only 10 ms of CPU per 100 ms (0.1 CPU), so we pause it for 400 ms – it has then used 50 ms of CPU in 500 ms, which is 0.1! Then we can allow it to run for another 100 ms (see the sketch below). Of course the CPU demand will likely vary, and the scheduling algorithm will adapt to that.
Use Namespaces to split a cluster into 'virtual' clusters.
[Figure labels: Cluster, Service, Container, …]
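The throttling arithmetic from the example above, as a small sketch (the 0.1 CPU limit, 100 ms check period and 50% demand are the slide's numbers, not Kubernetes defaults):

```python
# Sketch of the slide's CPU-throttling arithmetic (not a real scheduler):
# limit of 0.1 CPU, checked every 100 ms, process wanting 50% of a CPU.
period_ms = 100          # accounting period
cpu_limit = 0.1          # allowed CPU per period -> 10 ms of CPU time per 100 ms
process_demand = 0.5     # the process would use 50% of a CPU if unthrottled

used_ms = process_demand * period_ms           # 50 ms of CPU consumed in the first 100 ms
allowed_ms_per_period = cpu_limit * period_ms  # 10 ms allowed per 100 ms

# Pause until average usage falls back to the limit:
# used_ms / (period_ms + pause_ms) == cpu_limit  =>  pause_ms = used_ms/cpu_limit - period_ms
pause_ms = used_ms / cpu_limit - period_ms
print(pause_ms)                                # 400.0 ms pause
print(used_ms / (period_ms + pause_ms))        # 0.1 -> back at the limit
```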
Slide 22: Compute Examples – SMP
https://en.wikipedia.org/wiki/Symmetric_multiprocessing
Slide 23: Compute Examples – CPU split into Cores
Slide 24: Compute Examples – Memory linked with virtual memory + swap file
Slide 25: Thin (Over) Provisioning complicates capacity management
Often used when allocating capacity from a pool.
Rather than allocating a single value, allocate a minimum and a maximum value:
The min value is reserved and guaranteed (exclusive allocation).
The capacity from min to max is not guaranteed (shared allocation).
The maximum value can be used to determine the unallocated capacity and also to determine the amount of overprovisioning – the pool may limit the total allocated or use some other algorithm (e.g. allowing 20% overprovisioning). A sketch of such a check follows below.
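A minimal sketch of how a pool might police min/max allocations under thin provisioning; the function name, the (min, max) representation and the 20% figure are illustrative, taken only from the slide's example:

```python
# Sketch of a thin-provisioning admission check for a capacity pool.
# Names and the 20% overprovisioning factor are illustrative (the slide's example).

def can_allocate(pool_capacity, existing, new_min, new_max, overprovision_factor=1.2):
    """existing is a list of (min, max) allocations already granted by the pool."""
    total_min = sum(lo for lo, _ in existing) + new_min
    total_max = sum(hi for _, hi in existing) + new_max

    # Guaranteed (exclusive) capacity must always fit in the real pool...
    if total_min > pool_capacity:
        return False
    # ...while the sum of maxima may exceed it, but only up to the overprovisioning limit.
    return total_max <= pool_capacity * overprovision_factor

# 100 units of capacity, one existing allocation of min 40 / max 60:
print(can_allocate(100, [(40, 60)], new_min=30, new_max=50))   # True  (70 <= 100, 110 <= 120)
print(can_allocate(100, [(40, 60)], new_min=30, new_max=80))   # False (140 > 120)
```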
Slide 26: Capacity Allocation
[Figure: a Host providing CPU and Memory capacity (overhead, allocated, unallocated) to consuming VMs (VM 1, VM 2), each with an allocated amount, a guaranteed allocation and a max allocation.]
Slide 27: Resource Capacity Provision
[Figure: provision-side capacity bars – Total, Overhead, Unallocated, Allocated, Allocatable, Exclusive Allocation, Shared Allocation; Actual Usage is highly variable, from 0 to the Max Allocation.]
Note that some of these values can be derived from the others, so not all need to be stored in the model.
Slide 28: Resource Capacity Allocation
[Figure: allocation-side capacity bars – Max Allocation (the limit; there may be no limit), Allocated, Shared Allocation, Exclusive Allocation#, Overhead*, Total consumed; Actual Usage is highly variable, from 0 to the Max Allocation. # = Guaranteed / Reserved Allocation.]
Note that some of these values can be derived from the others, so not all need to be stored in the model – see the sketch below.
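As an illustration of deriving some capacity values from others (the attribute names follow the figure labels and are not agreed model attributes):

```python
# Sketch of deriving capacity values from others, so not all need to be stored.
# Attribute names follow the figure labels and are illustrative only.

def derive_provision_values(total, overhead, exclusive, shared):
    """Provision side: what has been handed out and what is still available."""
    allocated = exclusive + shared
    allocatable = total - overhead          # capacity the pool can actually hand out
    unallocated = allocatable - allocated
    return {"allocated": allocated, "allocatable": allocatable, "unallocated": unallocated}

# 100 units provisioned, 5 lost to overhead, 40 exclusive + 30 shared already allocated:
print(derive_provision_values(total=100, overhead=5, exclusive=40, shared=30))
# {'allocated': 70, 'allocatable': 95, 'unallocated': 25}
```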
Slide 29: Capacity Clustering & Pooling
Need to understand how to aggregate and partition allocations.
[Figure: a Host Cluster's Resource Pool 1 aggregating pooled provision from VM Host 1 and VM Host 2 (each with overhead, unallocated and allocated capacity); allocations are made from the pool, and some capacity goes to another Pool.]