Disruptive Storage Workshop Lustre Hardware Primer


Disruptive Storage Workshop Lustre Hardware Primer

Hello everyone. Are you confused by all the choices and recent technology disruptions, with almost weekly announcements of new tools, software, and hardware that promise to solve all of your technology challenges? Are you searching for a solutions integrator that will provide you more than cookie-cutter solutions? Interested in open source and in not being locked into a proprietary stack with high maintenance fees? What about working with a provider that truly places your organization's mission and business initiatives first? Look no further than Silicon Mechanics: we are an open-technology solutions provider that has been in business for over 15 years. My name is XXXX, and I proudly serve as the XXX of Silicon Mechanics, where we believe in using open-source technology platforms to empower organizations through innovation.

Don Tanner, Director of Pre-Sales Engineering / Storage Architect, July 2017
© 2015 Silicon Mechanics Inc.

CONFIDENTIAL | Silicon Mechanics

Session Goals
Metadata and Manager Servers
Object Storage Servers
Differences/latency of infrastructure: InfiniBand vs. OmniPath vs. Ethernet networking
Options for overcoming the barrier to entry

Scalable Building Block Architecture

Metadata and Manager Servers

Metadata and Manager Servers (MDS/MGS)
CPU/Memory: Dual Intel Xeon E5-2667 v4, 3.2GHz, 8-core; 128 GB RAM
Fabric: One FDR InfiniBand port per node (2x total); other fabrics can be used
Storage interconnect: 12Gb/s (SAS3) Avago Syncro hardware RAID controllers
Metadata Target (MDT): 2U, 24-bay 2.5" drive JBOD with 10K/15K spinning drives
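As a concrete illustration of how the MDS/MGS hardware above is provisioned, the sketch below formats a combined MGS/MDT on one RAID volume. The filesystem name, device path, and mount point are hypothetical placeholders, and it assumes the Lustre server packages are already installed on the node.

```shell
# Hedged sketch: format and mount a combined MGS + MDT (index 0).
# "testfs" and /dev/mapper/mdt0 are placeholder names, not from the slides.
mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/mapper/mdt0

# Bring the target online; mounting with -t lustre starts the Lustre services.
mkdir -p /mnt/mdt0
mount -t lustre /dev/mapper/mdt0 /mnt/mdt0
```

This is a configuration fragment that requires real Lustre block devices, so it is shown for orientation rather than as something runnable as-is.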

Object Storage Server (OSS) and Object Storage Target (OST)

Scalability and Flexibility

Object Storage Server (OSS) and Object Storage Target (OST)
CPU/Memory: Dual Intel Xeon E5-2667 v4, 3.2GHz, 8-core; 128 GB RAM
Fabric: One FDR InfiniBand port per node (2x total); other fabrics can be used
Object Data Target (OST): 4U, 90-bay 3.5" drive JBOD(s) with shared, redundant SAS3 expanders
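Once the MGS/MDS is up, each OSS formats its OSTs against it. A minimal sketch, again with a hypothetical filesystem name, MGS NID, and device paths (the `@o2ib` suffix selects the InfiniBand LNet transport):

```shell
# Hedged sketch: format one OST on an OSS and register it with the MGS.
# 192.168.1.10@o2ib is a placeholder MGS NID; adjust to your fabric.
mkfs.lustre --fsname=testfs --ost --index=0 \
    --mgsnode=192.168.1.10@o2ib /dev/mapper/ost0

# Mount the OST to bring it into service.
mkdir -p /mnt/ost0
mount -t lustre /dev/mapper/ost0 /mnt/ost0
```

As above, this is a configuration fragment that needs real hardware behind it; each additional OST on the server repeats the pattern with the next `--index`.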

Object Storage Server (OSS) and Object Storage Target (OST)

Scalability and Flexibility

Object Storage Server (OSS) and Object Storage Target (OST)
CPU/Memory: Dual Intel Xeon E5-2667 v4, 3.2GHz, 8-core; 128 GB RAM
Fabric: One FDR InfiniBand port per node (2x total); other fabrics can be used
Object Data Target (OST): 4U, 44-bay 3.5" drive JBOD with redundant SAS3 expanders

Networking Choices

Networking Choices
Mellanox InfiniBand: 100Gbps, 56Gbps, 40Gbps; ~1 μs latency, with CPU offload options
Intel OmniPath Architecture: 100Gbps; ~1 μs latency, with no CPU offload options
Ethernet (Mellanox, Brocade, Juniper, Quanta): 100Gbps, 50Gbps, 40Gbps, 25Gbps, 10Gbps; ~3 μs latency, with no CPU offload options

Fabric        100Gb   56Gb   50Gb   40Gb   25Gb   10Gb
InfiniBand      X       X             X
OmniPath        X
Ethernet        X              X      X      X      X

Peak throughput at each line rate: 100Gb = 12.5GB/sec, 50Gb = 6.25GB/sec, 40Gb = 5GB/sec, 25Gb = 3.13GB/sec
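The throughput figures in the table are simply the line rate divided by eight bits per byte. A quick sketch of that conversion (ignoring protocol overhead, which trims a few percent off in practice):

```python
def gbps_to_gb_per_sec(line_rate_gbps: float) -> float:
    """Convert a link line rate in Gb/s to peak payload throughput in GB/s."""
    return line_rate_gbps / 8  # 8 bits per byte

# The line rates quoted on the networking slide above
for rate in (100, 56, 50, 40, 25, 10):
    print(f"{rate:>3} Gb/s link -> {gbps_to_gb_per_sec(rate):.2f} GB/s peak")
```

Real Lustre throughput lands below these peaks once RPC, checksum, and transport overheads are paid.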

Networking Choices

Network                                          Switch Port Cost   Adapter Port Cost   Connection Cost Total
InfiniBand 100Gbps, 36-port switch               $725               $1650               $2375
OmniPath 100Gbps, 48-port switch                 $425               $960                $1385
Mellanox Ethernet 100Gbps/32-port, 25Gbps NIC    $1025              $540                $1565
Quanta Ethernet 100Gbps/48-port, 25Gbps NIC      $230               $540                $770
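The per-connection totals above are just switch-port cost plus adapter-port cost. A small sketch that reproduces them (the dollar figures are the slide's 2017 estimates, not current pricing):

```python
def connection_cost(switch_port_usd: int, adapter_port_usd: int) -> int:
    """Cost to attach one node: one switch port plus one adapter port."""
    return switch_port_usd + adapter_port_usd

# (switch port, adapter port) costs from the pricing slide above
fabrics = {
    "InfiniBand 100Gbps (36-port)": (725, 1650),
    "OmniPath 100Gbps (48-port)": (425, 960),
    "Mellanox Ethernet 100G switch + 25G NIC": (1025, 540),
}
for name, (switch, adapter) in fabrics.items():
    print(f"{name}: ${connection_cost(switch, adapter)} per node")
```

Multiplying the per-node figure by the node count gives a first-order fabric budget; it omits cables, spine switches, and support contracts.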

Thank you for your time. We hope you now have a better idea of the value that we can bring to [Name of org]. Any further questions?
© 2015 Silicon Mechanics Inc.