Product Deep Dive: Violin Memory 6000 Series (© Violin Memory, Inc. 2014)

Presentation transcript:

1 © Violin Memory, Inc Product Deep Dive Violin Memory 6000 Series

2 © Violin Memory, Inc More Demand for Data, Now! MORE APPLICATIONS, MORE DEVICES, MORE USERS: real-time, concurrent data access on heavily virtualized infrastructure across compute, network, and storage. Multi-core compute is I/O starved, with CPUs waiting for I/O. Storage must deliver high random IOPS and low latency.

3 © Violin Memory, Inc How Do You Make Storage Go FAST?
- Short stroking
- Wide striping
- Adding SSD to a legacy array
- Host-side read cache
- "FAST"
- "Easy Tier"
All of these mean high acquisition costs and higher operational costs.

4 © Violin Memory, Inc ALWAYS AVAILABLE. AMAZINGLY ECONOMICAL. INSANELY POWERFUL. Best performance value, lower infrastructure costs, eliminated I/O bottlenecks, drastically reduced latency, full redundancy built in, fully hot swappable. Engineered for flash: the 6000 Series Flash Array.

5 © Violin Memory, Inc Specifications

VIMM count x capacity   Form factor / flash type   Raw capacity (TiB / TB)   Usable (84% / 65%)   Max 4KB IOPS (mixed)   Max bandwidth (100% reads)   Nominal latency (mixed)
24x 512 GiB             3U / Capacity (MLC)        12 / 13                   6.5 / 5              200K                   1.5 GB/s                     500 µs
24x 1 TiB               3U / Capacity (MLC)        24 / 26                   13 / 10              350K                   2 GB/s                       500 µs
64x 512 GiB             3U / Capacity (MLC)        32 / 35                   20 / –               500K                   4 GB/s                       500 µs
64x 1 TiB               3U / Capacity (MLC)        64 / 70                   – / 31               750K                   4 GB/s                       500 µs
24x 256 GiB             3U / Performance (SLC)     6 / 6.6                   3 / –                450K                   3 GB/s                       250 µs
64x 256 GiB             3U / Performance (SLC)     16 / 17.5                 – / 7.5              1M                     4 GB/s                       250 µs

I/O connectivity (all configurations): 8Gb FC, 10GbE iSCSI, 40Gb IB, PCIe Gen 2

6 © Violin Memory, Inc Insanely Powerful. Get Your Storage on Moore's Law Curve
- NO MORE "I/O WAIT": 1 million IOPS, latency in µsec
- FAST BY DEFAULT: no tuning needed
- SUSTAINED EXTREME PERFORMANCE: scale without fear

7 © Violin Memory, Inc Architecture Fundamentals: Violin Memory OS (vMOS)
- System Operations: Web, CLI, REST
- System Management: storage virtualization, hardware acceleration, multi-level flash optimization
- Data Management: snapshots, clones, thin provisioning, encryption, deduplication*, replication*

8 © Violin Memory, Inc vMOS – Violin Memory Operating System
- SYSTEM OPERATIONS: system-wide wear leveling, self-healing integrated RAID, multi-level wide striping, die and block failure handling, efficient garbage collection
- SYSTEM MANAGEMENT: LUN management, multi-pathing, high-availability clustering, proactive health monitoring, SNMP, CLI, UI, REST API
- DATA MANAGEMENT: snapshots, clones, full-disk encryption, thin provisioning, space management

9 © Violin Memory, Inc Engineered For Performance & Reliability

10 © Violin Memory, Inc Engineered For Performance & Reliability
- Flash memory fabric: heart of the system; 24 to 64 hot-swappable VIMMs; 4x vRAID Control Modules (VCMs)
- Array Control Modules: fully redundant; control the flash memory fabric; system-level PCIe switching
- Active/active Memory Gateways: storage virtualization; LUN configuration
- I/O modules: FC, 10GbE, IB, PCIe interfaces

11 © Violin Memory, Inc Multi-Level Redundancy – Hot-Swap Anything
- Fans (x6)
- Power supplies (x2)
- VIMMs (60 + 4 hot spares)
- vRAID controllers (x4)
- Array controllers (x2)
- Memory Gateways (x2)

12 © Violin Memory, Inc Flash Memory Fabric – Up to 1 Million IOPS
- Up to 64 Violin Intelligent Memory Modules (VIMMs): PCIe connected; fully hot swappable; 4 global spares
- 4 active-active vRAID Control Modules (VCMs)
- Fabric-level flash optimization: patented vRAID algorithms; dynamic wear leveling; multi-level Error Correction Code; hardware-based garbage collection
- Performance optimization: dynamic wide data striping; flash erase hiding; VIMM failure protection

13 © Violin Memory, Inc No SSDs ─ Violin Intelligent Memory Modules
- Core building block of the memory fabric: 256 GB SLC flash or 512 GB / 1024 GB MLC flash; 3 GB to 8 GB DRAM for flash metadata and write I/O buffering
- Hot swappable
- Proprietary flash endurance and wear leveling extend flash life up to 10x: continuous data scrubbing; advanced hardware-based ECC; automated in-place die failure handling

14 © Violin Memory, Inc System-Level Automatic Data Placement Optimization
- By default, each VCM controls 15 VIMMs: 3 VIMM protection groups, each comprising 5 VIMMs
- Data is dynamically placed on VIMMs
- Example of an incoming 4 KB write: received by a Memory Gateway (MG); forwarded to a VCM; split into 4x 1 KB data writes plus 1 parity write across the 5 VIMMs of a protection group (see the sketch below)
- Any VIMM failure triggers activation of a global-spare VIMM and a vRAID rebuild
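To make the write path concrete, here is a minimal sketch of the 4 KB split described on this slide, assuming simple XOR parity (Violin's actual vRAID encoding is proprietary); the chunk size and the `split_with_parity` helper are illustrative, not part of vMOS.

```python
import os

CHUNK = 1024  # 1 KB pieces of a 4 KB host write

def split_with_parity(block_4k: bytes):
    """Split a 4 KB write into 4 x 1 KB data chunks plus one XOR parity chunk."""
    assert len(block_4k) == 4 * CHUNK
    chunks = [block_4k[i * CHUNK:(i + 1) * CHUNK] for i in range(4)]
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*chunks))
    return chunks + [parity]  # one piece per VIMM in the 5-wide protection group

if __name__ == "__main__":
    pieces = split_with_parity(os.urandom(4 * CHUNK))
    # Any single lost piece can be rebuilt from the other four (the basis of a vRAID rebuild).
    rebuilt = bytes(b ^ c ^ d ^ p for b, c, d, p in zip(*pieces[1:]))
    assert rebuilt == pieces[0]
    print("4 KB write ->", len(pieces), "pieces of", CHUNK, "bytes each")
```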

15 © Violin Memory, Inc Every LUN Capable of Up to 1 Million IOPS, By Default
- Full system bandwidth is available to every LUN
- Automatic multi-level striping: Memory Gateways to VCMs; VCM-level wide striping across VIMMs; VIMM-level wide striping across internal flash chips (sketched below)
- All operations are implemented in hardware, at line speed, ensuring the lowest levels of latency
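A rough sketch of the multi-level striping idea, assuming a plain nested-modulo layout; the VCM/VIMM/die counts and the `place` function are illustrative assumptions, not the actual vMOS mapping.

```python
# Stripe-width constants are illustrative assumptions, not the real layout.
N_VCMS = 4           # vRAID Control Modules in the array
VIMMS_PER_VCM = 15   # per the "each VCM controls 15 VIMMs" figure above
DIES_PER_VIMM = 16   # hypothetical flash-die count inside a VIMM

def place(block_addr: int):
    """Map a LUN block address through the three striping levels."""
    vcm = block_addr % N_VCMS                                        # gateway -> VCM
    vimm = (block_addr // N_VCMS) % VIMMS_PER_VCM                    # VCM -> VIMM
    die = (block_addr // (N_VCMS * VIMMS_PER_VCM)) % DIES_PER_VIMM   # VIMM -> flash die
    return vcm, vimm, die

if __name__ == "__main__":
    for addr in range(8):  # consecutive blocks land on different VCMs first
        print(addr, place(addr))
```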

16 © Violin Memory, Inc Low-Level Flash Operations Can Lead to Poor I/O Latency
- Read latency is low
- A write takes 10x to 20x longer than a read
- An erase takes roughly 100x longer than a read

         Read ops   Write ops   Erase ops
  SLC    25 µs      250 µs      1,000 µs
  e-MLC  50 µs      1,500 µs    5,500 µs
  MLC    50 µs      900 µs      3,000 µs

Spike-free low latency requires special handling of erase operations.
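A quick back-of-the-envelope calculation using the MLC figures from the table above shows why an unhidden erase dominates read latency; the queuing model (one read stuck behind one erase on the same die) is a simplifying assumption.

```python
# MLC figures from the table above, in microseconds
READ_US, WRITE_US, ERASE_US = 50, 900, 3000

best_case = READ_US                # target die is idle
behind_erase = ERASE_US + READ_US  # read queued behind an in-flight erase

print(f"idle die:        {best_case} us")
print(f"behind an erase: {behind_erase} us ({behind_erase / best_case:.0f}x slower)")
```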

17 © Violin Memory, Inc The Write Cliff Affects All Flash Solutions To Some Degree
- New write operations get queued behind erase operations
- Up to a 60% performance drop: transient random-write bandwidth degradation (source: NERSC)
- The real issue is that erase operations also get in the way of read operations
- Mitigating or eliminating the write cliff requires special flash-management logic

18 © Violin Memory, Inc Patented Algorithms Deliver Spike-Free Low Latency
- Background garbage collection ensures free pages for all incoming writes
- Garbage collection is implemented in hardware within each VIMM for line-rate performance
- Garbage collection is tightly scheduled and orchestrated at the system level so that it does not affect system performance
- Garbage collection is allowed on only one VIMM per protection group at a time (see the scheduling sketch below)

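A minimal sketch of the scheduling constraint described above: grant garbage collection to at most one VIMM per protection group per cycle, so the other four members stay available to serve reads. The `schedule_gc` function and the deque-based queue are illustrative assumptions, not Violin's actual scheduler.

```python
from collections import deque

def schedule_gc(protection_groups):
    """Grant GC to at most one VIMM per protection group for this cycle.

    protection_groups: list of deques holding the ids of VIMMs that want to
    garbage-collect; the remaining members of each group keep serving I/O.
    """
    grants = []
    for group in protection_groups:
        if group:
            grants.append(group.popleft())  # one grant per group, FIFO order
    return grants

if __name__ == "__main__":
    groups = [deque(["vimm-0", "vimm-1"]), deque(["vimm-7"]), deque()]
    print(schedule_gc(groups))  # ['vimm-0', 'vimm-7']
    print(schedule_gc(groups))  # ['vimm-1'] on the next cycle
```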

20 © Violin Memory, Inc vRAID Erase Hiding In Action
- Reads are never blocked by garbage collection: a read that lands on a busy VIMM is served by a vRAID rebuild from the remaining 4 VIMMs of the protection group (read path sketched below)
- System-level orchestration enables sustained low latency for mixed workloads
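A sketch of the read path under erase hiding, reusing the XOR layout assumed in the earlier write example: if the VIMM holding the requested chunk is busy with garbage collection, the chunk is rebuilt from the other four members of the protection group. Function and variable names are hypothetical.

```python
from functools import reduce

def read_chunk(group, want, busy_with_gc):
    """Return data chunk `want` (0-3) from a 5-piece protection group.

    group: [chunk0, chunk1, chunk2, chunk3, parity]; if the VIMM holding the
    requested chunk is busy garbage-collecting, rebuild it from the other four.
    """
    if want not in busy_with_gc:
        return group[want]  # normal fast path, no erase in the way
    others = [piece for idx, piece in enumerate(group) if idx != want]
    return bytes(reduce(lambda x, y: x ^ y, vals) for vals in zip(*others))

if __name__ == "__main__":
    data = [bytes([i]) * 1024 for i in range(4)]
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data))
    assert read_chunk(data + [parity], 2, busy_with_gc={2}) == data[2]
    print("read served without waiting on the busy VIMM")
```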

21 © Violin Memory, Inc World-Record-Breaking Performance
- June 29, TPC-E World Record
- May 9, TPC-C World Record
- May 23, TPC-C World Record
- June 22, 2011 – File System World Record
- December 8, TPC-C World Record
- September 12, 2012 – VMmark 2.1 World Record
- September 18, 2012 – VMmark 2.1 World Record
- September 27, 2012 – TPC-C World Record
- October 02, 2012 – VMmark 2.1 World Record
- November 13, 2012 – VMmark 2.1 World Records (5 of them)

22 © Violin Memory, Inc Amazingly Economical. Reduce Cost Across Your Infrastructure
- REDUCE STORAGE COSTS BY 7X COMPARED TO DISK: never overprovision
- UNMATCHED OPERATIONAL COST: plug-and-play experience
- NEAR INSTANT ROI: optimize server and license costs

23 © Violin Memory, Inc Storage Cost Per Application Is What Matters
Database requirements: 1 TB and 20K IOPS.
- Tier 1 disk array: high-performance HDD; $4 per raw GB; 200 IOPS per disk; 146 GB per disk; meeting 20K IOPS takes 100 disks x 146 GB = 14.6 TB raw
- Violin flash Memory Array: $5 per raw GB / $8.5 per usable GB; vRAID; 750K IOPS for a LUN of any size; 1 TB at 750K IOPS

24 © Violin Memory, Inc Application Owners Pay 7x Less on Violin
Database requirements: 1 TB and 20K IOPS.
- Tier 1 disk array: 14.6 raw TB at $4 per raw GB = $58,400 for this database
- Violin Memory 6264: 1 usable TB at $8.5 per usable GB = $8,500 for this database
Application storage cost is 7x lower with Violin (see the arithmetic below).
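The 7x claim reduces to simple arithmetic with the figures quoted on these two slides; the 200 IOPS-per-disk value is inferred from the 100-disk count on the previous slide.

```python
iops_needed, capacity_needed_gb = 20_000, 1_000

# Tier 1 disk array: sized by IOPS, not by capacity
iops_per_disk, gb_per_disk, usd_per_raw_gb = 200, 146, 4
disks = iops_needed // iops_per_disk                 # 100 disks
disk_cost = disks * gb_per_disk * usd_per_raw_gb     # $58,400 for 14.6 TB raw

# Violin flash array: sized by capacity only (any LUN sees full-array IOPS)
usd_per_usable_gb = 8.5
flash_cost = capacity_needed_gb * usd_per_usable_gb  # $8,500

print(f"disk: ${disk_cost:,}  flash: ${flash_cost:,.0f}  "
      f"ratio: {disk_cost / flash_cost:.1f}x")
```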

25 © Violin Memory, Inc Simple Operations
- Provision storage and go: select LUN capacity and let vRAID automate placement; no tuning required; hot swap for non-disruptive operations
- Seamlessly handle performance spikes. Customer example: rogue full-table scans in DBA scripts; the system handled the load spikes and still met core application SLAs
- Advanced graphical user interface: fully customizable dashboard; detailed performance statistics; supported as a vCenter plug-in

Violin Memory Inc. Proprietary 26

Violin Memory Inc. Proprietary 27

28 © Violin Memory, Inc Violin Symphony: Manage PBs in a Flash!
- Manage hundreds of Violin flash arrays through a single interface
- Enable multi-tenancy with role-based access control and Smart Groups
- Share information through custom reports with up to 2 years of historical data
- Achieve proactive wellness with advanced health and SLA monitoring
- Personalize visibility through fully customizable dashboards and gadgets

29 © Violin Memory, Inc Eliminate "I/O Wait"; Reduce HW & SW Costs at the Speed of Memory
- More ops/sec with fewer CPU cores
- More ops/sec with less DRAM cache
- Fewer software licenses
CPU cycle with magnetic disk: 80% I/O wait, 20% work. CPU cycle with memory storage: 5% I/O wait, 95% work. (A rough calculation follows below.)
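A rough calculation behind the "fewer cores, fewer licenses" point, using the 20%-work versus 95%-work split from this slide; the 32-core host is a hypothetical example, not a figure from the deck.

```python
work_on_disk, work_on_flash = 0.20, 0.95   # useful-work fraction per CPU cycle

speedup_per_core = work_on_flash / work_on_disk     # ~4.75x more work per core
cores_before = 32                                   # hypothetical host size
cores_after = round(cores_before / speedup_per_core)

print(f"{speedup_per_core:.2f}x useful work per core: "
      f"{cores_before}-core workload fits in ~{cores_after} cores")
```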

30 © Violin Memory, Inc VMworld 1 Million IOPS – 2011 vs. 2012
- 2011: storage engines plus 960 drives for 1 million read IOPS; 5 racks or 210 RU; 32,000 watts
- 2012: 2 Violin 6616 Memory Arrays; 1 VM at 1 million IOPS (random R/W mix); 6 RU (97% less); 3,600 watts (90% less)

31 © Violin Memory, Inc Bringing the Speed of Flash to All Your Applications
- TARGETING ALL APPLICATIONS: enterprise applications on legacy SAN; enterprise applications on memory SAN; scale-out applications
- DISRUPTIVE ECONOMICS: reduced capex; streamlined opex; ready for petabyte scale

32 © Violin Memory, Inc Back-up slides

33 © Violin Memory, Inc Violin 6264: A New Standard in Performance Economics. Versus the Violin 6232: higher efficiency, better economics, same footprint, double capacity.

34 © Violin Memory, Inc Violin 6264 Flash Memory Array at a Glance
- 50% lower power
- 750K IOPS (peak, 70:30 mix)
- $/GB between memory and disk
- 19nm process geometry
- 64 TiB of capacity in the same 3U form factor

35 © Violin Memory, Inc Comparing 6264 and 6232 Hardware & Software
- The 6264 requires 250 W less than the 6232: 1,500 W for the 6264 versus 1,750 W for the 6232, a result of a more power-efficient VIMM hardware design
- 6264-specific hardware improvements: new 1 TiB MLC VIMMs; a new chassis with better cable management; FC, iSCSI and IB configurations come with a new ACM (internal clustering for vMOS 6, and a native 40GbE port for future use, enabling data movement across arrays); the PCIe configuration leverages the same ACM as the 6232
- The 6264 requires Array Firmware 6.2 or above: Memory Gateway software is functionally equivalent to vMOS 6.0; Memory Array firmware adds resilience and support for the new ACMs and VIMMs
- vMOS 6.3 will support all 6600 and 6200 Series arrays

36 © Violin Memory, Inc Array Control Module with 40Gbps Ports

37 © Violin Memory, Inc Back panel view – 6264 FC/iSCSI/IB – New ACM