Washington DC Area Technical Update: OpenVMS
March 28, 2001
Brian Allison

2 Discussion Topics
FC basics
HBVS & DRM
SANs
Fibre Channel tape support
SCSI/Fibre Channel Fast Path
FC 2001 plans
FC futures

3 Fibre Channel
ANSI standard network and storage interconnect
–OpenVMS, like most other operating systems, uses it for SCSI storage
1.06 gigabit/sec, full-duplex, serial interconnect
–2 Gb in late 2001… 10 Gb over the next several years
Long distance
–OpenVMS supports 500 m multi-mode fiber and 100 km single-mode fiber
–Longer distances with inter-switch ATM links, if DRM is used
Large scale
–Switches provide connectivity and bandwidth aggregation, supporting hundreds of nodes
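A rough payload check on those numbers, assuming standard 8b/10b encoding on 1 Gb FC: the link signals at 1.0625 Gbaud, and 8b/10b leaves 8/10 of that as data, so 1.0625 × 8/10 ≈ 0.85 Gb/s, or roughly 106 MB/s of payload in each direction.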

4 Topologies
Arbitrated loop FC-AL (NT/UNIX today)
–Uses hubs (or new switch hubs)
–Max number of nodes is fixed at 126
–Shared bandwidth
Switched (SAN - VMS / UNIX / NT)
–Highly scalable
–Multiple concurrent communications
–Switch can connect other interconnect types

5 Fibre Channel Link Technologies
Multi-mode fiber
–62.5 micron, 200 m
–50 micron, 500 m (widely used)
Single-mode fiber for Inter-Switch Links (ISLs)
–9 micron, 100 km
DRM supports ISL gateways
–T1/E1, T3/E3 or ATM/OC3
DRM also supports ISLs over Wave Division Multiplexors (WDM) and Dense Wave Division Multiplexing (DWDM)

6 Current Configurations
Up to twenty switches (8- or 16-port) per FC fabric
AlphaServer 800, 1000A*, 1200, 4100, 4000, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS80, GS140, GS160 & GS320
Adapters per host (max of 2, 4, 8, or 26) determined by the platform type
Multipath support - no single point of failure
100 km max length
* The AS1000A does not have console support for FC.

7 Long-Distance Storage Interconnect
FC is the first long-distance storage interconnect
–New possibilities for disaster tolerance
Host-Based Volume Shadowing (HBVS)
Data Replication Manager (DRM)

8 A Multi-site FC Cluster
[Diagram: two sites, each with an FC host, an FC switch, and an HSG controller, linked switch-to-switch; host-to-host cluster communication between the sites; 100 km max]

9 HBVS: Multi-site FC Clusters (Q4 2000)
[Diagram: two sites, each with an Alpha host, an FC switch, and an HSG controller, joined by a 100 km FC inter-switch link; the disks form a host-based shadow set; host-to-host cluster traffic runs over CI, DSSI, MC, FDDI (via GigaSwitch), T3, ATM, or Gigabit Ethernet]

10 HBVS Multi-site FC Pro and Con
Pro
–High performance, low latency
–Symmetric access
–Fast failover
Con
–ATM bridges not supported until some time in late 2001
–Full shadow copies and merges are required today; HSG write logging will address this after V7.3
–More CPU overhead
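To make the host-based side concrete, a two-member shadow set spanning the sites is created with an ordinary HBVS mount (a minimal sketch; the device names and volume label are hypothetical):

  $ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA10:, $1$DGA20:) DATA_VOL  ! one member per site behind virtual unit DSA1:

Writes go to both members, which is why an inter-site link failure forces the full copies and merges listed above.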

11 DRM Configuration
[Diagram: two sites, each with an FC host, an FC switch, and an HSG controller; cold stand-by nodes at the remote site; host-to-host cluster communication between the sites; 100 km max]

12 DRM Configuration
[Diagram: Alpha hosts and HSG controllers at two sites, joined by a 100 km single-mode FC inter-switch link carrying a controller-based remote copy set; host-to-host communication over LAN/CI/DSSI/MC; cold stand-by nodes at the remote site]

13 DRM Pro and Con
Pro
–High performance, low latency
–No shadow merges
–Supported now, and enhancements are planned
Con
–Asymmetric access
–Cold standby
–Manual failover (15 min. is typical)

14 Storage Area Networks (SAN)
Fibre Channel, switches, and the HSG together offer SAN capabilities
–First components of Compaq’s ENSA vision
Support non-cooperating heterogeneous and homogeneous operating systems, and multiple O.S. cluster instances, through:
–Switch zoning
   Controls which FC nodes can see each other
   Not required by OpenVMS
–Selective Storage Presentation (SSP)
   HSG controls which FC hosts can access a storage unit
   Uses an HSG access ID command
More interoperability to come, with support for transparent failover
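A sketch of SSP at the HSG80 CLI (the unit and connection names here are hypothetical, and exact syntax varies by ACS version):

  HSG80> SHOW CONNECTIONS                          ! list the host connections the controller has seen
  HSG80> SET D1 DISABLE_ACCESS_PATH=ALL            ! first hide unit D1 from every host
  HSG80> SET D1 ENABLE_ACCESS_PATH=(SYS1A, SYS2A)  ! then present D1 only to the named connections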

15 Zoning, SSP, and Switch-to-Switch Fabrics
[Diagram: Sys1 and Sys2 in Zone A, Sys3 and Sys4 in Zone B, on cascaded switches in front of one HSG with two disks; the HSG ensures that Sys1 and Sys2 get one disk, and Sys3 and Sys4 get the other]

16 Cascaded SAN
[Diagrams: 8-switch cascaded fabric; 8x2-switch cascaded, 2 fabrics]
 Well suited for applications where the majority of data access is local (e.g. multiple departmental systems)
 Scales easily for additional connectivity
 Supports from 2 to 20 switches (~200 ports)
 Supports centralized management and backup
 Server/storage switch connectivity is optimized for higher performance
 Design can be used for centralized or distributed access, provided that traffic patterns are well understood and factored into the design
 Supports multiple fabrics for higher availability

17 Meshed SAN
[Diagrams: 8-switch meshed fabric; 8x2-switch meshed, 2 fabrics]
 Provides higher availability since all switches are interconnected; the topology provides multiple paths between switches in case of link failure
 Ideal for situations where data access is a mix of local and distributed requirements
 Scales easily
 Supports centralized management and backup
 Supports from 2 to 20 switches
 Supports multiple fabrics for higher availability

18 Ring SAN
[Diagrams: 8-switch ring; 8x2-switch ring, 2 fabrics]
 Provides at least two paths to any given switch
 Well suited for applications where data access is localized, yet provides the benefits of SAN integration to the whole organization
 Scaling is easy, logical and economical
 Modular design
 Centralized management and backup
 Non-disruptive expansion
 Supports from 2 to 14 switches, and multiple fabrics

19 Skinny Tree Backbone SAN
[Diagrams: 10-switch skinny tree; 10x2-switch skinny tree, 2 fabrics]
 Highest fabric performance
 Best for “many-to-many” connectivity and evenly distributed bandwidth throughout the fabric
 Offers maximum flexibility for implementing mixed access types (local, distributed, centralized)
 Supports centralized management and backup
 Can be implemented across wide areas with inter-switch distances up to 10 km
 Can be implemented with different availability levels, including multiple fabrics
 Can be an upgrade path from other designs
 Supports 2 to 20 switches

20 Fibre Channel Tape Support (V7.3)
Modular Data Router (FireFox)
–Fibre Channel to parallel SCSI bridge
–Connects to a single Fibre Channel port on a switch
Multi-host, but not multi-path
Can be served to the cluster via TMSCP
Supported as a native VMS tape device by COPY, BACKUP, etc. (see the sketch below)
ABS, MRU, SLS support is planned
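Once an FC tape appears as a native device, standard DCL works against it unchanged (a minimal sketch; the device name $2$MGA0:, source disk, and save-set name are hypothetical):

  $ INITIALIZE $2$MGA0: BACKUP                          ! write a volume label on the FC tape
  $ MOUNT/FOREIGN $2$MGA0:                              ! mount it for BACKUP's use
  $ BACKUP/IMAGE DKA100: $2$MGA0:NIGHTLY.BCK/SAVE_SET   ! image backup of a disk to the tape
  $ DISMOUNT $2$MGA0: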

21 Fibre Channel Tape Pictorial
[Diagram: three OpenVMS Alpha hosts and a RAID array disk controller on an FC switch; the MDR (FireFox) bridges the switch to a tape library; cluster host-to-host traffic also reaches an OpenVMS Alpha or VAX node off the fabric]

22 Fibre Channel Tape Support (V7.3)
Planned device support
–DLT 35/70
–TL891
–TL895
–ESL 9326D
–SSL2020 (AIT 40/80 drives)
–New libraries with DLT8000 drives

23 Fibre Channel Tape Device Naming
WWID uniquely identifies the device
–Obtained from SCSI inquiry page 83 or 80
WWID-based device name
–$2$MGAn, where n is assigned sequentially
–Remembered in SYS$DEVICES.DAT
–Coordinated cluster-wide
   Multiple system disks and SYS$DEVICES.DAT files are allowed

24 SCSI/Fibre Channel “Fast Path” (V7.3)
Improves I/O scaling on SMP platforms
–Moves I/O processing off the primary CPU
–Reduces “hold time” of IOLOCK8
–Streamlines the normal I/O path
–Pre-allocates “resource bundles”
Round-robin CPU assignment of fast-path ports
–CI, Fibre (KGPSA), parallel SCSI (KZPBA)
Explicit controls available (see the sketch below)
–SET DEVICE/PREFERRED_CPU
–SYSGEN parameters FAST_PATH and FAST_PATH_PORTS
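A minimal sketch of those controls from DCL (the port name FGA0: is hypothetical; the qualifier and parameter names are as listed on the slide):

  $ SET DEVICE/PREFERRED_CPU=2 FGA0:    ! steer this fast-path port's I/O work to CPU 2
  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> SHOW FAST_PATH                ! 1 = fast path enabled system-wide
  SYSGEN> SHOW FAST_PATH_PORTS         ! bit mask selecting which port types use fast path
  SYSGEN> EXIT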

25 Fibre Channel 2001 Plans
Multipath failover to served paths
–Current implementation supports failover among direct paths
–High-availability FC clusters want to be able to fail over to a served path when FC fails
–Served-path failover planned for V7.3-1 in late 2001
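For context, today's direct-path multipath state can be inspected and switched manually from DCL (a sketch; the device and path names are hypothetical, and the qualifier spelling may vary by version):

  $ SHOW DEVICE/FULL $1$DGA100:                                  ! lists the discovered paths and the current one
  $ SET DEVICE/SWITCH/PATH=PGB0.5000-1FE1-0001-2345 $1$DGA100:   ! manually switch to another direct path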

26 Fibre Channel 2001 Plans
Expanded configurations
–Greater than 20 switches per fabric
–ATM links
–Larger DRM configurations

27 Fibre Channel 2001 Plans
HSG write logging
–Mid/late 2001
–Requires ACS 8.7

28 Fibre Channel 2001 Plans
2 Gb links
–End-to-end upgrade during 2001
–LP9002 (2 Gb PCI adapter)
–Pleadies 4 switch (16 2 Gb ports)
–HSVxxx (2 Gb storage controller)
   2 Gb links to FC drives

29 Fibre Channel 2001 Plans
HSV storage controller
–Follow-on to the HSG80/60
–Creates virtual volumes from physical storage
–~2x HSG80 performance
–248 physical FC drives (9 TB); dual-ported 15k rpm drives
–2 Gb interface to the fabric
–2 Gb interface to drives
–Early-ship program Q3 2001

30 Fibre Channel 2001 Plans
SAN management appliance
–NT-based web server
–Browser interface to SAN switches
   HSG60/80
   HSV
   All future SAN-based storage
–Host-based CLI interface also planned

31 Fibre Channel Futures????
Low-cost clusters
–Low-cost FC adapter
–FC-AL switches
–Low-end storage arrays
Native FC tapes
Cluster traffic over FC
Dynamic path balancing
Dynamic volume expansion
SMP distributed interrupts
Multipath tape support
IP over FC

32 Potential Backports
–Fibre Channel tapes
–MSCP multipath failover
No plans to backport SCSI or FC Fast Path

Fibre is good for you!