Switched Storage Architecture Benefits Computer Measurements Group November 14th, 2002 Yves Coderre



Evolution of Technology

Disk Technology

RAID Technology
      ”        1 GB       3600 RPM
      ”        3-9 GB     5400 RPM
1996  Various  18-36 GB   7200 RPM
1998  Various  72 GB      10K RPM
2000  Various  180 GB     15K RPM

IOPS Measurements
Rotational Speed
Seek and Latency
Linear and Spatial Density
RAID Protection
Read/Write Ratio
Cache Hits

Theoretical Calculation
Theoretical IOPS of a Spindle
IOPS = 1000 / (Average Seek + Latency)
Average Seek = (Ws + Rs) / 2
Latency (ms) = (1000 / RPS) / 2
Computes to 2.99 ms for 10,025 RPM drives
Computes to 2.00 ms for 15,000 RPM drives
Ex: 1000 / (5.7 ms + 2.99 ms) ≈ 115 IOPS
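The theoretical formula above can be sketched in a few lines; the seek time and RPM figures are the examples quoted on the slide (10K-class drives actually spun at 10,025 RPM, which is why the latency works out to 2.99 ms rather than an even 3.00 ms).

```python
def rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half a revolution, (1000 / RPS) / 2."""
    rps = rpm / 60.0
    return (1000.0 / rps) / 2.0

def theoretical_iops(avg_seek_ms: float, rpm: float) -> float:
    """Theoretical IOPS of a spindle: 1000 / (average seek + latency)."""
    return 1000.0 / (avg_seek_ms + rotational_latency_ms(rpm))

print(round(rotational_latency_ms(10_025), 2))   # 2.99 ms
print(round(rotational_latency_ms(15_000), 2))   # 2.0 ms
print(round(theoretical_iops(5.7, 10_025)))      # 115 IOPS
```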

Practical Calculation
Accounting for R/W Ratio & Read Hits
IOPS = 1000 / [(Rs + L) × Rm × Read% + (Ws + L) × Write%]
Taking into account the number of spindles per RAID group, the RAID penalty, and the type of workload, one can easily calculate the number of spindles required to process a given number of IOPS for a given workload type.

Sample Calculation
10,000 IOPS, 3/1 R/W, 70% Read Hits, 100% Spindle Busy
10K RPM Drives (Rd Seek 5.2 ms, Wr Seek 6.0 ms)
  RAID 5 (3+1): 16 Array Groups (64 Drives)
  RAID 1 (2+2): 13 Array Groups (52 Drives)
15K RPM Drives (Rd Seek 3.9 ms, Wr Seek 4.5 ms)
  RAID 5 (3+1): 11 Array Groups (44 Drives)
  RAID 1 (2+2): 10 Array Groups (40 Drives)
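A minimal sketch of the practical formula, applied to the sample workload. Rm is the read-miss ratio (read hits are served from cache). Note this gives only the raw front-end IOPS per spindle and a raw spindle count; the array-group counts on the slide additionally fold in the RAID write penalty and group geometry, which the slide does not spell out, so the figures below are illustrative rather than a reproduction of the slide's drive counts.

```python
import math

def practical_iops(rs_ms, ws_ms, latency_ms, read_pct, read_hit):
    """Slide formula: IOPS = 1000 / [(Rs+L)*Rm*Read% + (Ws+L)*Write%]."""
    rm = 1.0 - read_hit            # read-miss ratio
    write_pct = 1.0 - read_pct
    service_ms = ((rs_ms + latency_ms) * rm * read_pct
                  + (ws_ms + latency_ms) * write_pct)
    return 1000.0 / service_ms

# Sample workload: 10,000 IOPS, 3/1 R/W (75% reads), 70% read hits.
iops_10k = practical_iops(5.2, 6.0, 2.99, 0.75, 0.70)
iops_15k = practical_iops(3.9, 4.5, 2.00, 0.75, 0.70)
print(round(iops_10k))                      # per-spindle IOPS, 10K drives
print(math.ceil(10_000 / iops_10k))         # raw spindles, before RAID penalty
print(math.ceil(10_000 / iops_15k))         # 15K drives need fewer spindles
```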

Channel Technology
1990  Block Mux      3-4.5 MB/Sec
1993  ESCON          17 MB/Sec
1996  Fibre Channel  100 MB/Sec
1998  Fibre Channel  200 MB/Sec
2000  FICON          100 MB/Sec
2002  FICON          200 MB/Sec

Channel Connectivity
BMUX   72 MB/Sec
ESCON  272 MB/Sec
ESCON  544 MB/Sec
Fibre  3.2 GB/Sec
FICON  3.2 GB/Sec
FICON  6.4 GB/Sec

Disk Subsystems
      , 3990 with Attached Disk
1991  ICDA Technology  4 GB-32 GB
1993  ICDA             512 GB
1995  ICDA             1 TB
1997  RAID Subsystems  5 TB
2000  RAID Subsystems  75 TB

IO Intensity Factors
Disk Technology: 5 MB to 180 GB capacity; 3600 to 15,000 RPM
RAID Technology: 5.25” to 3.5” to 1” (1 GB to 180 GB)
Channel Bandwidth & Connectivity: 3.5 MB/Sec to 200 MB/Sec, 64 ports
Disk Subsystem evolution: 1 GB to 100 TB high-performance subsystems

Growth Trends
Demand for bandwidth is growing faster than capacity requirements

Shared Bus Architecture

Switch Architecture 2000
“(…) the most innovative technology, which built a SAN rather than a backbone bus into its Storage Sub-Systems to deliver exceptional performance and capacity flexibility.”
Bob Zimmerman, Giga Group
“The company’s new Switch Architecture further demonstrated their commitment to technological innovation and business-enabling solutions, and redefines the industry standard, once again.”
Jack Scott, Evaluator Group, Inc.

Switched Fabric Architecture
Control: 2 × 3.2 GB/s; Data: 2 × 3.2 GB/s
100 MHz × 2 Bytes = 200 MB/Sec
200 MB/Sec × 16 Paths = 3.2 GB/Sec

Switch Architecture
64 GB Cache
32 host connections: FC, ESCON, FICON, iSCSI, NAS
32 cache connections
5 GB/s data bandwidth; 5 GB/s control bandwidth
Shared Memory HSN (166 MHz):
 1) 4 paths per CHA/DKA
 2) 32 paths per SM (each side)
Cache HSN (166 MHz):
 1) 2 paths per CHA/DKA
 2) 8 paths per CSW for CHA/DKA
 3) 8 paths per CSW for Cache
 4) 8 paths per Cache
 5) 32 paths per DKC (CSW-Cache)
 6) 16 paths per Cluster (CSW-Cache)
 7) 32 paths per DKC (CHA/DKA-CSW)
 8) 16 paths per Cluster (CHA/DKA-CSW)
Up to 32 FC-AL backend paths
166 MHz × 2 Bytes = 332 MB/Sec
332 MB/Sec × 32 Paths = 10.6 GB/Sec
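The bandwidth arithmetic on the two architecture slides follows one pattern: per-path speed is clock frequency times bus width, and aggregate bandwidth is per-path speed times path count. A small sketch confirming both figures:

```python
def path_mb_s(clock_mhz: float, width_bytes: int) -> float:
    """Per-path bandwidth in MB/s: clock (MHz) x bus width (bytes)."""
    return clock_mhz * width_bytes

def aggregate_gb_s(clock_mhz: float, width_bytes: int, paths: int) -> float:
    """Aggregate bandwidth in GB/s across all paths."""
    return path_mb_s(clock_mhz, width_bytes) * paths / 1000.0

print(aggregate_gb_s(100, 2, 16))            # 3.2 GB/s  (100 MHz, 16 paths)
print(round(aggregate_gb_s(166, 2, 32), 1))  # 10.6 GB/s (166 MHz, 32 paths)
```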

Paradigm Shift

Tangible Benefits
Reduced Total Cost of Ownership
Enables massive consolidation and centralization
Reduced complexity: simplifies storage networking environments with fewer switches and connections
Simplified management: simplified and automated tools reduce time spent managing storage; people can be re-deployed for other tasks
Reduced software licensing and maintenance through improved capacity utilization: less capacity means lower licensing and maintenance costs
 One 6 TB versus three 4 TB
 $700K plus
Improved environmental costs: reduced floor space, power, cooling

Network Management Requires an Open, Standards-Based Approach
Exchanging APIs leads to a growing web of proprietary interfaces
Storage networks require an object-based Common Information Model (CIM) for management of mixed environments
Web-Based Enterprise Management (WBEM) provides a standard management interface for existing Web servers
CIM/WBEM is an industry-accepted specification that provides a truly open and adaptive standard for heterogeneous storage management
Software vendors write to an open interface; no need for proprietary commitments
Hardware vendors provide a common object-based management interface that still enables them to provide differentiation

The Importance of a Message Bus
A CIM object enables ISVs to code to a common interface
However, ISVs still need to communicate with each other to reduce management complexity
A Simple Object Access Protocol (SOAP) message bus provides a standard interface for communication between ISV products
The new application framework should be based on a CIM/SOAP management message bus
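To make the SOAP message-bus idea concrete, here is a hedged sketch of the kind of envelope two management products might exchange. The SOAP envelope namespace is the standard SOAP 1.1 one; the payload element names (GetInstance, StorageVolume, DeviceID) are illustrative CIM-flavored examples, not taken from any schema on the slide.

```python
import xml.etree.ElementTree as ET

# Standard SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_envelope(payload: ET.Element) -> bytes:
    """Wrap an arbitrary payload element in a SOAP Envelope/Body."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(payload)
    return ET.tostring(env, encoding="utf-8")

# Illustrative payload: one product asks a peer about a volume instance.
payload = ET.Element("GetInstance")
ET.SubElement(payload, "ClassName").text = "StorageVolume"
ET.SubElement(payload, "DeviceID").text = "LUN-0042"
print(make_envelope(payload).decode())
```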

High Performance, Open Computing
Computer Measurements Group
Thank You
Yves Coderre