
Accelerating Applications with NVM Express™ Computational Storage 2019 NVMe™ Annual Members Meeting and Developer Day March 19, 2019 Prepared by Stephen Bates, CTO, Eideticom & Richard Mataya, Co-Founder & EVP, NGD Systems

Agenda What?? Why?? Who?? How??

WHAT??

“NVMe™ is a transport” - Michael Corwell, GM Storage, Microsoft Azure, December 5, 2018

One Driver to Rule Them All?! NVMe™ has been incredibly successful as a storage protocol. It is also being used for networking (NVMe-oF™ and things like AWS Nitro and Mellanox’s Sexy NVMe Accelerator Platform (SNAP)). Why not extend NVMe to compute and make it the one driver to rule them all?

What is Computational Storage? SNIA has defined the following:
Computational Storage Drive (CSD): a component that provides persistent data storage and computational services.
Computational Storage Processor (CSP): a component that provides computational services to a storage system without providing persistent storage.
Computational Storage Array (CSA): a collection of computational storage drives, computational storage processors and/or storage devices, combined with a body of control software.

WHY??

Why NVMe™?
Accelerators require: low latency, high throughput, low CPU overhead, multicore awareness, management at scale and QoS awareness.
NVMe provides: low latency, high throughput, low CPU overhead, multicore awareness, management at scale and QoS awareness.
The real question is “Why not NVMe?”

Let’s Go Fishing for Data

NVMe™ Computational Storage
An NVMe based Computational Storage Processor (CSP) advertises zlib compression. The Operating System detects the presence of the NVMe CSP, and the device-mapper uses it to offload zlib compression to NoLoad. This can be combined with p2pdma to further offload IO. With standardization this can be vendor-neutral and upstreamed.
[Diagram: CPU and DRAM attached to a PCIe subsystem containing the NVMe CSP (with a Controller Memory Buffer, CMB) and NVMe SSDs]
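To make the offload concrete, below is a minimal, hedged sketch of how a host could hand a buffer to such a CSP through the stock Linux NVMe passthrough ioctl. The ioctl and struct are existing kernel interfaces, but the compression opcode, namespace ID and device path are invented for illustration; the real command set is exactly what the standardization effort would define.

/*
 * Hedged sketch: submit a hypothetical "compress" command to an NVMe
 * Computational Storage Processor via the standard Linux NVMe
 * passthrough ioctl. The opcode and command layout are illustrative only.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

#define HYPOTHETICAL_COMPRESS_OPCODE 0x99   /* not a real NVMe opcode */

int main(void)
{
        uint8_t src[4096] = { 0 };                 /* data to compress */
        int fd = open("/dev/nvme0n1", O_RDWR);     /* example CSP namespace path */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct nvme_passthru_cmd cmd = {
                .opcode   = HYPOTHETICAL_COMPRESS_OPCODE,
                .nsid     = 1,
                .addr     = (uintptr_t)src,
                .data_len = sizeof(src),
        };

        /* NVME_IOCTL_IO_CMD is the stock passthrough path; only the
         * command contents above are invented here. */
        if (ioctl(fd, NVME_IOCTL_IO_CMD, &cmd) < 0)
                perror("ioctl");

        close(fd);
        return 0;
}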

NVMe-oF™ Computational Storage
An NVMe™ CSP is represented as an NVMe Computation Namespace, so it can be exposed over Fabrics. Compute nodes can borrow CSPs, CSDs and standard NVMe SSDs via fabrics from Computational Storage Arrays (CSAs). NVMe Computational Storage can use the same fabrics commands that are used by legacy NVMe-oF. Application code is identical regardless of whether the computation is local (PCIe) or remote (Fabrics).
[Diagram: compute nodes connected through an Ethernet TOR switch to a Computational Storage Array holding NVMe CSPs, CSDs and SSDs]
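A short sketch of that location-transparency point, assuming hypothetical /dev/nvmeXcsY-style device nodes: the application calls the same routine whether the computation namespace sits on a local PCIe CSP or was borrowed from a CSA over fabrics; only the path it opens differs.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

/* Same submit path regardless of where the computation namespace lives. */
int submit_compute(const char *dev, struct nvme_passthru_cmd *cmd)
{
        int fd = open(dev, O_RDWR);
        if (fd < 0)
                return -1;
        int rc = ioctl(fd, NVME_IOCTL_IO_CMD, cmd);  /* identical command either way */
        close(fd);
        return rc;
}

/* submit_compute("/dev/nvme0cs1", &cmd);   CSP attached over local PCIe          */
/* submit_compute("/dev/nvme3cs1", &cmd);   CSP borrowed from a CSA over fabrics  */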

Example of a Hadoop Cluster with In-Situ Processing
The ability to migrate Data Nodes into the drives allows the user to reduce the host CPU core count.

Network Monitoring Offload

WHO?

SNIA Computational Storage TWG

HOW?

NVMe™ for Computation: Software
[Stack diagram: applications, libcsnvme and SPDK in userspace, with nvme-cli and nvme-of as management tools, sitting above the OS, which drives the hardware (NVMe CSPs, CSDs and CSAs)]
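The slide names libcsnvme as the user-space library in this stack, but no public interface for it exists. The following is only a hypothetical sketch of what such an API could look like; every type and function here is invented for illustration.

#include <stddef.h>

typedef struct csnvme_dev csnvme_dev_t;     /* opaque handle to a CSP or CSD */

/* Discovery: open a computation namespace the kernel has exposed and ask
 * which computational services it advertises (e.g. "zlib", "sha256"). */
csnvme_dev_t *csnvme_open(const char *dev_path);          /* e.g. "/dev/nvme0cs1" */
int           csnvme_query_services(csnvme_dev_t *dev, char *buf, size_t len);

/* Execution: hand input and output buffers to the device and block until
 * the named service completes. */
int csnvme_exec(csnvme_dev_t *dev, const char *service,
                const void *in, size_t in_len,
                void *out, size_t *out_len);

void csnvme_close(csnvme_dev_t *dev);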

NVMe™ for Computation: Standards
NVMe Computation Namespaces: a new namespace type with its own namespace ID, command set and admin commands. Operating Systems can treat these namespaces differently from storage namespaces.
Fixed Purpose Computation: some computation can be defined in a way that an Operating System can consume it directly (e.g. zlib compression tied into the crypto API in Linux).
General Purpose Computation: some Computation Namespaces will be flexible and can be programmed and used in user-space (/dev/nvmeXcsY anyone?).
NVMe Computation over Fabrics: user-space does not know or care if /dev/nvmeXcsY is local (PCIe) or remote (Fabrics).
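As one possible illustration of how an OS or tool might discover a Computation Namespace, here is a hedged sketch built on the existing Identify admin command and the Linux admin passthrough ioctl. The Identify opcode (0x06) and the ioctl are real; the CNS value selecting a "computation namespace" data structure is invented, since defining it is precisely what a TP would have to do.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

#define NVME_ADMIN_IDENTIFY      0x06   /* standard Identify admin opcode */
#define HYPOTHETICAL_CNS_COMPUTE 0x7F   /* invented CNS for a computation namespace */

int main(void)
{
        uint8_t id_data[4096] = { 0 };
        int fd = open("/dev/nvme0", O_RDWR);   /* controller character device */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct nvme_admin_cmd cmd = {
                .opcode   = NVME_ADMIN_IDENTIFY,
                .nsid     = 1,
                .addr     = (uintptr_t)id_data,
                .data_len = sizeof(id_data),
                .cdw10    = HYPOTHETICAL_CNS_COMPUTE,  /* CNS selects the returned structure */
        };

        if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
                perror("identify");
        else
                printf("namespace 1: identify (hypothetical compute CNS) returned OK\n");

        close(fd);
        return 0;
}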

There are many paths to Computational Storage

Processor Path in an NGD Systems NVMe™ SSD
It’s an NVMe SSD at the core: no impact on host read/write, no impact on the NVMe driver, standard protocols.
But then there is MORE (patented IP): dedicated compute resources, HW acceleration for data analytics, a seamless programming model, and scalability.

Call to Arms! If this all sounds interesting, please join the SNIA Computational Storage TWG; end-users and software people are needed! If you have thoughts on how you would consume NVMe™ Computation, please let us know. As SNIA starts interfacing with NVMe, please participate in the TPAR/TP discussions! NVMe + computation = awesome

Questions?