NVM Programming Model

2 Emerging Persistent Memory Technologies
- Phase change memory: heat changes memory cells between crystalline and amorphous states.
- Spin torque memory / MRAM: magnetic layers within memory cells are aligned in parallel or anti-parallel orientations.
- Resistive RAM: ion concentrations migrate within memory cells to change resistance.
All of these can have latencies close enough to DRAM's to allow them to be accessed like DRAM.

3 Persistent Memory Vision
PM brings storage to memory slots: fast like memory, durable like storage.

4 Application View of I/O
- Software overheads must be driven down to keep pace with devices.
- NUMA latencies of up to 200 ns have historically been tolerated without blocking.
- Anything above 2-3 µs will probably need a context switch.
- Latencies between these two thresholds are disruptive: too long to busy-wait through, too short to justify a context switch.

5 SNIA NVM Programming Model
Version 1 was approved by SNIA in December and is downloadable by anyone.
- Exposes new features of block and file to applications:
  - Atomicity capability and granularity
  - Thin provisioning management
- Uses memory-mapped files for persistent memory:
  - An existing abstraction that can act as a bridge to higher value from persistent memory
  - Limits the scope of re-invention from an application point of view
  - Open-source implementations are already available for incremental innovation (e.g. PMFS)
- A programming model, not an API:
  - Describes behaviors in terms of actions
  - Facilitates discovery of capabilities using attributes
  - Usage illustrated as use cases
  - Implementations map actions and attributes to API elements

6 Conventional Block and File Modes
BLOCK mode describes extensions for:
- Atomic write features
- Granularities (length, alignment)
- Thin provisioning management
FILE mode describes extensions for:
- Discovery and use of atomic write features
- Discovery of granularities (length, alignment characteristics)
Memory mapping in NVM.FILE mode uses volatile pages and writes them to disk or SSD.
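The granularity attributes exist so an application can shape its I/O to what the device can commit atomically. As a toy illustration (the function and parameter names here are invented, not taken from the specification), a writer might round a byte range out to the discovered atomic-write alignment and granularity before issuing it:

```python
def align_write(offset, length, atomic_granularity, atomic_alignment):
    """Round an (offset, length) write out to boundaries the device
    can commit atomically, per discovered BLOCK-mode attributes."""
    # Round the start down to the alignment boundary.
    start = (offset // atomic_alignment) * atomic_alignment
    # Round the end up to a multiple of the atomic granularity.
    end = offset + length
    end = ((end + atomic_granularity - 1) // atomic_granularity) * atomic_granularity
    return start, end - start

# e.g. a device reporting 4 KiB atomic granularity and alignment
print(align_write(5000, 100, 4096, 4096))  # -> (4096, 4096)
```

A real implementation would obtain the two attribute values from the discovery actions the model defines, rather than hard-coding them.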

7 Persistent Memory Modes
NVM.PM.VOLUME mode provides a software abstraction of Persistent Memory (PM) hardware to OS components:
- A list of physical address ranges for each PM volume
- Thin provisioning management
NVM.PM.FILE mode describes the behavior for applications accessing persistent memory, including:
- Mapping PM files (or subsets of files) to virtual memory addresses
- Syncing portions of PM files to the persistence domain
Memory mapping in NVM.PM.FILE mode enables direct access to persistent memory using CPU instructions.
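The map and sync actions of NVM.PM.FILE correspond closely to POSIX mmap and msync. A minimal sketch in Python, using an ordinary temporary file to stand in for a file on a PM-aware (DAX) filesystem; on real persistent memory the stores below would land directly in PM, and the flush would commit them to the persistence domain:

```python
import mmap
import os
import tempfile

# A throwaway file stands in for a file on a PM-aware filesystem.
path = os.path.join(tempfile.mkdtemp(), "pmfile")
with open(path, "wb") as f:
    f.truncate(mmap.PAGESIZE)          # size the file before mapping

fd = os.open(path, os.O_RDWR)
m = mmap.mmap(fd, mmap.PAGESIZE)       # the NVM.PM.FILE "map" action

m[0:5] = b"hello"                      # ordinary stores into the mapping
m.flush(0, mmap.PAGESIZE)              # the "sync" action (msync under the hood)

m.close()
os.close(fd)
```

With a true DAX mapping there is no page cache between the store and the media, so the sync action reduces to flushing CPU caches for the given range rather than scheduling writeback.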

8 What's Next for NVM Programming?
Beyond version 1, three work items are under investigation:
1) Software hints
   - Application usage and access patterns
   - Optimization based on discovered device attributes
   - Presenting hints emerging in standards (SCSI, NVMe) to applications
2) Atomic transactional behavior
   - Adds atomicity and recovery to the programming model
   - Not addressed by the current sync semantics
3) Remote access
   - Disaggregated memory
   - RDMA direct to NVM
   - High availability, clustering, and capacity expansion use cases
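The gap behind item 2 shows up in the classic recoverable-update pattern: sync makes a range durable, but says nothing about ordering across ranges, so the application itself must flush the payload before publishing a valid flag. A sketch of that "flush, set flag, flush again" ordering, using a plain mmap as a stand-in for PM (the file layout and names are illustrative, not from the specification):

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE
path = os.path.join(tempfile.mkdtemp(), "log")
with open(path, "wb") as f:
    f.truncate(2 * PAGE)               # page 0: payload, page 1: valid flag

fd = os.open(path, os.O_RDWR)
m = mmap.mmap(fd, 2 * PAGE)

def atomic_publish(payload: bytes):
    # Step 1: write the payload and force it to the persistence domain.
    m[0:len(payload)] = payload
    m.flush(0, PAGE)
    # Step 2: only now flip the valid flag and flush it. A crash before
    # this point leaves the old state; recovery code checks the flag.
    m[PAGE] = 1
    m.flush(PAGE, PAGE)

atomic_publish(b"record-1")
```

A transactional extension to the model would let applications express this ordering and recovery guarantee directly instead of hand-rolling it per update.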

9 RDMA Challenge
Use case:
- RDMA writes to persistent memory
- Within a memory-mapped file
- Built on NVM.PM.FILE from the version 1 programming model
How can the initiator of an RDMA write efficiently learn when the data is persistent at the remote side?
- Sync semantics for remote access
- Flushing processor queues on the remote end
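One candidate answer is an explicit round trip: the initiator RDMA-writes the data, then sends a small "make persistent" request that the target acknowledges only after flushing to its persistence domain. A toy model of that ordering (plain function calls stand in for RDMA verbs; the class and method names are invented for illustration):

```python
class RemotePMTarget:
    """Stands in for the passive side: RDMA writes land in its memory,
    but are not durable until flushed to the persistence domain."""
    def __init__(self, size):
        self.buffer = bytearray(size)    # RDMA-accessible region
        self.durable = bytearray(size)   # what would survive power loss

    def rdma_write(self, offset, data):
        # A one-sided write: data arrives, but with no durability guarantee.
        self.buffer[offset:offset + len(data)] = data

    def flush_request(self, offset, length):
        # Analogous to flushing processor/IO queues on the remote end
        # before acknowledging persistence back to the initiator.
        self.durable[offset:offset + length] = self.buffer[offset:offset + length]
        return "ack"

def remote_sync(target, offset, data):
    target.rdma_write(offset, data)
    # The initiator learns of persistence only from the explicit ack.
    return target.flush_request(offset, len(data))

t = RemotePMTarget(64)
ack = remote_sync(t, 0, b"durable!")
```

The cost of this pattern is an extra round trip per sync, which is exactly why the slide frames efficient remote persistence as an open challenge.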

Discussion