Task 36b Scope – CPU & Memory (L=ChrisH)

Task 36b Scope – CPU & Memory (L=ChrisH)
Purpose – add CPU and Memory functionality support to the model
Specifically –
Includes – DSP, FPGA, GPU? L1 & L2 cache, page files?
Excludes – CPU and Memory physical inventory, which is covered by the existing Equipment model. Excludes Storage.
External Dependencies –
Assumptions –
Risks –

Team Members
Leader – Chris Hartley (chrhartl@cisco.com)
Members – ???

IPR Declaration
Is there any IPR associated with this presentation? NO
NOTICE: This contribution has been prepared to assist the ONF. This document is offered to the ONF as a basis for discussion and is not a binding proposal on Cisco or any other company. The requirements are subject to change in form and numerical value after more study. Cisco specifically reserves the right to add to, amend, or withdraw statements contained herein. THE INFORMATION HEREIN IS PROVIDED "AS IS," WITHOUT ANY WARRANTIES OR REPRESENTATIONS, EXPRESS, IMPLIED OR STATUTORY, INCLUDING WITHOUT LIMITATION, WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Need for a CPU and Memory Model
This is part of the data center triad:
- Compute
- Network – done
- Storage – started
(Figure: Storage, Network and Compute classified by Information Transfer and Information Modification, Y/N.)

Why do we care about all this?
The key questions that need to be supported are:
- Is my application working?
- Can users connect to my application?
Managing each separately is easier, but doesn't help if the issue is a mismatch between the compute and network worlds.
Applications now are quite different from what they were in 1981, and have quite different network requirements. (1981 is an arbitrary date – the year the IBM PC was released.)

We need to support various arrangements of data and instructions
https://en.wikipedia.org/wiki/Flynn%27s_taxonomy
Here, Processing Unit = CPU core (or FPGA, DSP, GPU etc. equivalent).
(Figure: Flynn's taxonomy – single/multiple instruction streams crossed with single/multiple data streams.)

We need to support various arrangements of CPU and memory https://en.wikipedia.org/wiki/Von_Neumann_architecture https://en.wikipedia.org/wiki/Symmetric_multiprocessing

Types of Memory have different speed and capacity tradeoffs
(Figure, not to scale: the memory hierarchy – CPU registers, L1 cache, L2 cache, main memory (volatile), storage (non-volatile). Capacity increases and speed decreases (latency increases) moving down the hierarchy; cost per bit is also a factor.)

DMTF CIM 2.48

DMTF CIM 2.48

SNIA Swordfish / DMTF Redfish
https://www.snia.org/forums/smi/swordfish
http://redfish.dmtf.org/schemas/swordfish/v1/
The key Redfish concepts appear to be:
- Processor – a processor attached to a System
- Memory – a Memory and its configuration
- MemoryChunks – a Memory Chunk and its configuration
- MemoryDomain – used to indicate to the client which Memory (DIMMs) can be grouped together in Memory Chunks to form interleave sets or otherwise grouped together

OpenConfig

Proposed Model – Package Dependencies
(Figure: package dependencies between the Compute, Equipment and Core packages.)

Proposed CPU Model
If a CPU is symmetric, use one physical entry per CPU. If a CPU is asymmetric, use one physical entry per group of similar CPU cores.
For example, a CPU may have 4 + 4 cores (4 × 1.8 GHz Type-A + 4 × 1.4 GHz Type-B), and hence 2 entries, each of 4 cores, would be used.
CPU chip(s) may be FRU or non-FRU inventory – here we are interested in the compute function.
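The one-entry-per-group approach can be sketched as follows (the class and field names here are illustrative assumptions, not names from the model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuCoreGroup:
    """One model entry per group of identical CPU cores."""
    core_type: str   # e.g. "Type-A" (hypothetical label)
    core_count: int
    clock_hz: int    # stored in Hz as an integer, avoiding float rounding

# The 4 + 4 asymmetric example: two entries, each covering 4 cores.
big_little_cpu = [
    CpuCoreGroup("Type-A", 4, 1_800_000_000),
    CpuCoreGroup("Type-B", 4, 1_400_000_000),
]

total_cores = sum(g.core_count for g in big_little_cpu)  # 8
```

A symmetric CPU would collapse to a single `CpuCoreGroup` entry with `core_count` equal to the full core count.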

Proposed Memory Model
Memory chip(s) / SIMM / DIMM modules may be FRU or non-FRU inventory – here we are interested in the memory function.

(copied from Storage pack) Compute Pool
- Note that the decision was made to have a single compute pool rather than separate Storage, CPU and Memory pools, because CPU and memory are usually tightly coupled and the pool can then allocate these consistently.
- Sometimes storage is tightly coupled with CPU and memory, and the pool can then allocate these consistently too.
- Storage, CPU and memory may not be the best grouping of the pool entries – perhaps physical vs logical, or input vs output – starting with a flat set of all the pool entries will allow the best grouping to evolve, rather than imposing an inheritance structure at this stage.
- Note that the pools aren't hierarchical (deliberately no ComputePool-contained-in-ComputePool self-join).
- The associations XxxPoolEntryIsLogical allow an output from one pool to become the input of another pool. We really want this to form a directed acyclic graph (no loops).
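The no-loops requirement on pool-to-pool links is checkable with a standard depth-first cycle search. A minimal sketch (the entry names and the link representation are assumptions for illustration):

```python
def has_cycle(links):
    """links: dict mapping a pool entry to the entries its output feeds.
    Returns True if the XxxPoolEntryIsLogical links contain a loop."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / in progress / done
    colour = {n: WHITE for n in links}

    def visit(n):
        colour[n] = GREY
        for m in links.get(n, ()):
            if colour.get(m, WHITE) == GREY:
                return True            # back edge: a loop exists
            if colour.get(m, WHITE) == WHITE and visit(m):
                return True
        colour[n] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(colour))
```

Such a check could run whenever a new association is added, rejecting any link that would turn the directed graph cyclic.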

Some Questions
- What units should we use for memory sizes – bytes, KiB, MiB …?
- What units should we use for CPU – Hz, kHz, MHz …?
- Note that Kubernetes works in units of CPU, where "One CPU, in Kubernetes, is equivalent to a Hyperthread on a bare-metal Intel processor with Hyperthreading". A CPU hardware thread is also called a vCPU.
- Note that we should try to avoid using floating-point numbers, as the rounding proves problematic.
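One way to avoid float rounding is to hold CPU quantities as integer milliCPU, the finest precision Kubernetes allows. A sketch (the parsing below is a simplification, not the full Kubernetes quantity grammar):

```python
def to_millicpu(quantity: str) -> int:
    """Convert a CPU quantity string ("250m", "0.5", "2") to integer milliCPU."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    # Fractional CPU values are allowed, but nothing finer than 1 milliCPU.
    whole, _, frac = quantity.partition(".")
    frac = (frac + "000")[:3]          # pad or truncate to milli precision
    return int(whole or "0") * 1000 + int(frac)
```

Storing `500` (milliCPU) rather than `0.5` keeps arithmetic on allocations exact.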

Compute Pool Group
This will allow us to group, for instance, CPU and Memory that need to be kept together.
Need to validate this.

The Compute Model relates to the software using it

Compute Examples – Simple Host (1)
1 CPU with 1 core, 1 memory DIMM, 1 local HDD
(Figure: CD = Phy(Chassis), CD = Phy(Blade1), CD = Phy(Blade2); a PC with RunningOS and Running Software Process – the OS invokes the Process, PC from RunningSoftware Process – mapped onto CPU cores and memory blocks.)

Kubernetes
https://en.wikipedia.org/wiki/Kubernetes
https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces
- "A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources" (e.g. main + sidecar). "Each Pod is meant to run a single instance of a given application."
- A Kubernetes Pod can request CPU and Memory and can also have CPU and Memory limits set. Memory is specified in bytes.
- "The CPU resource is measured in CPU units. One CPU, in Kubernetes, is equivalent to: 1 AWS vCPU, 1 GCP Core, 1 Azure vCore, 1 Hyperthread on a bare-metal Intel processor with Hyperthreading. Fractional CPU values are allowed." Precision finer than 1 milliCPU is not allowed.
- Note that CPU usage is really policed as the usage (cycles – (milli)seconds of use) in a period. Assume we have a limit of 0.1 CPU, we check every 100 ms, and our process is using 50% CPU. After 100 ms we see it has used 50 ms of CPU. It is allowed only 10 ms of CPU per 100 ms (0.1 CPU), so we pause it for 400 ms – it has then used 50 ms of CPU in 500 ms, which is 0.1! Then we can allow it to run for another 100 ms. Of course the CPU usage will likely vary, and the scheduling algorithm will adapt to that.
- Use Namespaces to split a cluster into 'virtual' clusters.
(Figure: a Cluster containing Services and Containers.)
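The throttling arithmetic in that example can be checked directly. This is a simplified model of quota-style enforcement, not the actual scheduler:

```python
def pause_needed_ms(limit_cpu: float, period_ms: int, used_ms: int) -> int:
    """How long to pause a process so its CPU use amortises to the limit.
    used_ms of CPU at limit_cpu means the window must stretch to
    used_ms / limit_cpu total milliseconds."""
    total_window_ms = used_ms / limit_cpu
    return int(total_window_ms - period_ms)

# Limit of 0.1 CPU, 100 ms check period, process used 50 ms of CPU:
# it must be paused 400 ms so that 50 ms / 500 ms = 0.1 CPU.
pause = pause_needed_ms(limit_cpu=0.1, period_ms=100, used_ms=50)
```

In a real scheduler the period is fixed and the quota is enforced per period, but the amortised result is the same.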

Compute Examples – SMP
https://en.wikipedia.org/wiki/Symmetric_multiprocessing

Compute Examples – CPU split into Cores

Compute Examples – Memory linked with virtual memory + swap file

Thin (Over) Provisioning complicates capacity management
- Often used when allocating capacity from a pool.
- Rather than allocating a single value, allocate a minimum and a maximum value:
- The min value is reserved and guaranteed (exclusive allocation).
- The capacity from min to max is not guaranteed (shared allocation).
- The maximum value can be used to determine the unallocated capacity and also the amount of overprovisioning – the pool may limit the total allocated, or use some other algorithm (e.g. allowing 20% overprovisioning).
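A minimal sketch of the min/max allocation check described above (the class name and the 20% policy are illustrative assumptions, not part of the model):

```python
class Pool:
    """Pool with guaranteed minimums and thin-provisioned maximums."""

    def __init__(self, capacity: int, overprovision_pct: int = 20):
        self.capacity = capacity
        # Total of maximums may exceed capacity, up to this limit.
        self.limit = capacity * (100 + overprovision_pct) // 100
        self.reserved = 0        # sum of guaranteed minimums
        self.allocated_max = 0   # sum of maximums

    def allocate(self, min_val: int, max_val: int) -> bool:
        # Minimums are exclusive: they must fit within real capacity.
        if self.reserved + min_val > self.capacity:
            return False
        # Maximums are shared: they may overcommit, up to the limit.
        if self.allocated_max + max_val > self.limit:
            return False
        self.reserved += min_val
        self.allocated_max += max_val
        return True

pool = Pool(capacity=100)   # limit = 120 with 20% overprovisioning
```

Using integer capacity units here sidesteps the float-rounding concern raised earlier.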

Capacity Allocation
(Figure: a providing Host and consuming VMs. For each of CPU and Memory, the Host shows Overhead, Allocated and Unallocated capacity; each VM (VM 1, VM 2) shows a Guaranteed Allocation and a Max Allocation.)

Resource Capacity Provision
(Figure: Total capacity split into Overhead, Allocated and Unallocated; Allocated (the allocatable part) split into Exclusive Allocation and Shared Allocation; Actual Usage is highly variable, from 0 to Max Allocation.)
Note that some of these values can be derived from the others, so not all need to be stored in the model.

Resource Capacity Allocation
(Figure: Max Allocation (the limit – there may be no limit); Allocated split into Exclusive Allocation (# = Guaranteed / Reserved Allocation) and Shared Allocation, plus Overhead*; Total consumed against Actual Usage, which is highly variable, from 0 to Max Allocation.)
Note that some of these values can be derived from the others, so not all need to be stored in the model.
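As the note says, some values are derivable from the others. A sketch of the obvious identities (the field names are assumptions for illustration):

```python
def unallocated(total: int, overhead: int, allocated: int) -> int:
    # Total = Overhead + Allocated + Unallocated
    return total - overhead - allocated

def allocated(exclusive: int, shared: int) -> int:
    # Allocated = Exclusive (guaranteed/reserved) + Shared allocation
    return exclusive + shared
```

Storing only the independent quantities and deriving the rest avoids the model ever holding inconsistent totals.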

Capacity Clustering & Pooling
Need to understand how to aggregate and partition allocations.
(Figure: a Host Cluster whose VM Hosts (VM Host 1, VM Host 2) each provide capacity – Overhead, Allocated, Unallocated – that is pooled into Resource Pool 1; the pooled provision is then allocated out to VMs, with some capacity allocated on to another Pool.)