Switched Storage Architecture Benefits
Computer Measurements Group, November 14, 2002
Yves Coderre
Evolution of Technology
Disk Technology
RAID Technology
- 1 GB, 3600 RPM
- 3-9 GB, 5400 RPM
- 1996: Various, 18-36 GB, 7200 RPM
- 1998: Various, 72 GB, 10K RPM
- 2000: Various, 180 GB, 15K RPM
IOPS Measurements
Factors:
- Rotational speed
- Seek and latency
- Linear and spatial density
- RAID protection
- Read/write ratio
- Cache hits
Theoretical Calculation
Theoretical IOPS of a spindle:
- IOPS = 1000 / (Average Seek + Latency)
- Average Seek = (Ws + Rs) / 2
- Latency (ms) = (1000 / RPS) / 2
  - Computes to 2.99 ms for 10,025 RPM drives
  - Computes to 2.00 ms for 15,000 RPM drives
Ex: 1000 / (5.7 ms + 2.99 ms) = 115 IOPS
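The theoretical formulas above can be sketched directly; this is a minimal illustration in which the seek values (5.4 ms read, 6.0 ms write, averaging 5.7 ms) are illustrative, not taken from a specific drive:

```python
# Theoretical IOPS of a single spindle, per the formulas on the slide above.

def rotational_latency_ms(rpm: float) -> float:
    """Half a revolution, in milliseconds: (1000 / RPS) / 2."""
    rps = rpm / 60.0
    return (1000.0 / rps) / 2.0

def theoretical_iops(read_seek_ms: float, write_seek_ms: float, rpm: float) -> float:
    """IOPS = 1000 / (average seek + rotational latency)."""
    avg_seek = (write_seek_ms + read_seek_ms) / 2.0
    return 1000.0 / (avg_seek + rotational_latency_ms(rpm))

latency_10k = rotational_latency_ms(10025)  # ~2.99 ms, as on the slide
latency_15k = rotational_latency_ms(15000)  # 2.00 ms, as on the slide
# Illustrative seek times giving a 5.7 ms average -> ~115 IOPS
iops_10k = theoretical_iops(5.4, 6.0, 10025)
```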
Practical Calculation
Accounting for R/W ratio and read hits:
- IOPS = 1000 / [(Rs + L) * Rm * Read% + (Ws + L) * Write%], where Rm is the read-miss ratio
Taking into account the number of spindles per RAID group, the RAID penalty, and the type of workload, one can easily calculate the number of spindles required to process a given number of IOPS for a given workload type.
Sample Calculation
10,000 IOPS, 3/1 R/W ratio, 70% read hits, 100% spindle busy
10K RPM drives (Rd seek 5.2 ms, Wr seek 6.0 ms):
- RAID 5 (3+1): 16 array groups (64 drives)
- RAID 1 (2+2): 13 array groups (52 drives)
15K RPM drives (Rd seek 3.9 ms, Wr seek 4.5 ms):
- RAID 5 (3+1): 11 array groups (44 drives)
- RAID 1 (2+2): 10 array groups (40 drives)
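The practical formula above can be turned into a spindle-count sketch. The `write_penalty` parameter is an assumption on my part (the rule-of-thumb back-end cost per host write, e.g. 4 for RAID 5, 2 for RAID 1); the slide does not state its penalty model, and its published drive counts also fold in array-group geometry, so this sketch will not reproduce them exactly:

```python
import math

def practical_iops(rs, ws, latency, read_hit, read_frac):
    """IOPS = 1000 / [(Rs+L)*Rm*Read% + (Ws+L)*Write%], Rm = read-miss ratio."""
    rm = 1.0 - read_hit
    write_frac = 1.0 - read_frac
    return 1000.0 / ((rs + latency) * rm * read_frac +
                     (ws + latency) * write_frac)

def spindles_needed(target_iops, rs, ws, latency, read_hit, read_frac,
                    write_penalty):
    """Hedged sketch: charge each host write `write_penalty` back-end I/Os."""
    rm = 1.0 - read_hit
    write_frac = 1.0 - read_frac
    per_spindle = 1000.0 / ((rs + latency) * rm * read_frac +
                            (ws + latency) * write_frac * write_penalty)
    return math.ceil(target_iops / per_spindle)

# 10K RPM drives from the slide: Rd seek 5.2 ms, Wr seek 6.0 ms, latency 2.99 ms
per_drive = practical_iops(5.2, 6.0, 2.99, read_hit=0.70, read_frac=0.75)
# ~244 host IOPS per spindle before any RAID penalty is applied
```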
Channel Technology
- 1990: Block Mux, 3-4.5 MB/sec
- 1993: ESCON, 17 MB/sec
- 1996: Fibre Channel, 100 MB/sec
- 1998: Fibre Channel, 200 MB/sec
- 2000: FICON, 100 MB/sec
- 2002: FICON, 200 MB/sec
Channel Connectivity
- BMUX: 72 MB/sec
- ESCON: 272 MB/sec
- ESCON: 544 MB/sec
- Fibre: 3.2 GB/sec
- FICON: 3.2 GB/sec
- FICON: 6.4 GB/sec
Disk Subsystems
- 3990 with attached disk
- 1991: ICDA technology, 4 GB-32 GB
- 1993: ICDA, 512 GB
- 1995: ICDA, 1 TB
- 1997: RAID subsystems, 5 TB
- 2000: RAID subsystems, 75 TB
IO Intensity Factors
- Disk technology: 5 MB to 180 GB capacity, 3600 to 15,000 RPM
- RAID technology: 5.25" to 3.5" to 1" form factors (1 GB to 180 GB)
- Channel bandwidth & connectivity: 3.5 MB/sec to 200 MB/sec, 64 ports
- Disk subsystem evolution: 1 GB to 100 TB high-performance subsystems
Growth Trends Demand for bandwidth is growing faster than capacity requirements
Shared Bus Architecture
Switch Architecture 2000
"(…) the most innovative technology, which built a SAN rather than a backbone bus into its Storage Sub-Systems to deliver exceptional performance and capacity flexibility." — Bob Zimmerman, Giga Group
"The company's new Switch Architecture further demonstrated their commitment to technological innovation and business-enabling solutions, and redefines the industry standard, once again." — Jack Scott, Evaluator Group, Inc.
Switched Fabric Architecture
- 3.2 GB/s control (x2), 3.2 GB/s data (x2)
- 100 MHz x 2 bytes = 200 MB/sec
- 200 MB/sec x 16 paths = 3.2 GB/sec
Switch Architecture
- 64 GB cache
- 32 host connections: FC, ESCON, FICON, iSCSI, NAS
- 32 cache connections
- 5 GB/s data bandwidth, 5 GB/s control bandwidth
Shared memory - HSN (166 MHz):
- 4 paths per CHA/DKA
- 32 paths per SM (each side)
Cache - HSN (166 MHz):
- 2 paths per CHA/DKA
- 8 paths per CSW for CHA/DKA
- 8 paths per CSW for cache
- 8 paths per cache
- 32 paths per DKC (CSW-cache), 16 paths per cluster (CSW-cache)
- 32 paths per DKC (CHA/DKA-CSW), 16 paths per cluster (CHA/DKA-CSW)
Back end:
- Up to 32 FC-AL back-end paths
- 166 MHz x 2 bytes = 332 MB/sec
- 332 MB/sec x 32 paths = 10.6 GB/sec data bandwidth
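The bandwidth figures on the two slides above follow the same arithmetic, frequency times bus width times path count, which a trivial helper makes explicit:

```python
# Aggregate bandwidth: bus frequency (MHz) x bus width (bytes) x paths -> MB/s.

def aggregate_bw_mb_s(freq_mhz: int, width_bytes: int, paths: int) -> int:
    return freq_mhz * width_bytes * paths

shared_fabric = aggregate_bw_mb_s(100, 2, 16)  # 3200 MB/s = 3.2 GB/s
switched      = aggregate_bw_mb_s(166, 2, 32)  # 10624 MB/s ~ 10.6 GB/s
```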
Paradigm Shift
Tangible Benefits
Reduced total cost of ownership:
- Enables massive consolidation & centralization
- Reduced complexity by simplifying storage networking environments with fewer switches and connections
- Simplified management: simplified and automated tools reduce the time spent managing storage; people can be redeployed to other tasks
- Reduced software licensing and maintenance through improved capacity utilization: less capacity means lower licensing and maintenance
  - One 6 TB subsystem versus three 4 TB subsystems: $700K plus
- Improved environmental costs: reduced floor space, power, cooling
Network Management Requires an Open, Standards-Based Approach
- Exchanging APIs leads to a growing web of proprietary interfaces
- Storage networks require an object-based Common Information Model (CIM) for management of mixed environments
- Web-Based Enterprise Management (WBEM) provides a standard management interface for existing Web servers
- CIM/WBEM is an industry-accepted specification that provides a truly open and adaptive standard for heterogeneous storage management
  - Software vendors write to an open interface; no need for proprietary commitments
  - Hardware vendors provide a common object-based management interface that still enables them to provide differentiation
[Diagram: point-to-point proprietary interfaces between ISVs (ISV1..ISVn) and IHVs (IHV1..IHVn) versus the same vendors connected through CIM and CIM/WBEM]
The Importance of a Message Bus
- A CIM object enables ISVs to code to a common interface
- However, ISVs still need to communicate with each other to reduce management complexity
- A Simple Object Access Protocol (SOAP) message bus provides a standard interface for communication between ISV products
- A new application framework should be based on a CIM/SOAP management message bus
[Diagram: ISVs and IHVs over CIM/WBEM, connected by a CIM/SOAP management message bus]
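As a rough illustration of what a message on such a bus might carry, the sketch below builds a SOAP 1.1 envelope in Python. Only the Envelope/Body structure and namespace are standard SOAP; the payload element names (`StorageVolume`, `CapacityGB`, `RaidLevel`) are hypothetical, not part of any CIM schema:

```python
# Minimal sketch of an inter-ISV message for a CIM/SOAP management bus.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

def build_soap_message(payload_tag: str, fields: dict) -> str:
    """Wrap a flat dict of fields in a SOAP Envelope/Body and serialize it."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    payload = ET.SubElement(body, payload_tag)
    for name, value in fields.items():
        ET.SubElement(payload, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical payload describing a storage volume to a peer management tool
msg = build_soap_message("StorageVolume", {"CapacityGB": 72, "RaidLevel": "RAID5"})
```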
High Performance, Open Computing
Computer Measurements Group
Thank You
Yves Coderre