Storage Area Networks: The Basics

Storage Area Networks
SANs are designed to give you:
• More disk space
• Multiple-server access to a single disk pool
• Better performance
• The option of disks distributed across multiple locations

Direct Attached Storage
Classically, for storage we had a single box with a bunch of disks attached:
[Diagram: a server on the public network, connected over a SCSI bus to LUN0, LUN1, and LUN2]

Attached Storage
The server speaks to the SCSI disks using a command language:
• Read from LUN0, Block 123
• Write to LUN1, Block 456
All of this goes over the SCSI bus, which is directly attached to the server; only that server has access to the bus.
The server would create a filesystem on the disk(s) and could then make the disks available to other computers via NFS, Samba, etc.
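The read/write examples above can be made concrete. As a sketch (the byte layout follows the standard SCSI READ(10) format, but the helper name is made up), here is how "Read from LUN0, Block 123" turns into the 10-byte command descriptor block that travels over the bus:

```python
import struct

def read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) command descriptor block (CDB).

    Note: the LUN itself is selected by the transport (which device
    the CDB is addressed to), not encoded inside a modern CDB.
    """
    return struct.pack(
        ">BBIBHB",
        0x28,         # operation code: READ(10)
        0x00,         # flags (RDPROTECT/DPO/FUA all clear)
        lba,          # 4-byte logical block address, big-endian
        0x00,         # group number
        num_blocks,   # 2-byte transfer length, in blocks
        0x00,         # control byte
    )

# "Read from LUN0, Block 123" -> a one-block READ(10) at LBA 123
cdb = read10_cdb(lba=123, num_blocks=1)
```

Everything in the rest of these slides is about moving exactly this kind of byte string between servers and disks over different networks.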

Network Attached Storage
This idea is easily extended to an appliance approach: configure a utility box with some disk that does only NFS or Samba/SMB, and place it on the network.
[Diagram: an NFS client on the public network talking to a NAS server (an NFS server with disks on its own SCSI bus)]

NAS and Servers
Redundant web servers share the same data--but they both talk to the same NFS server.
[Diagram: two web servers, each with the data NFS-mounted, talking over the public network to a single NAS/NFS server with disks on a SCSI bus]

Attached Storage
We can also do things like place a RAID array on the NAS server. This works, but it has some limitations:
• If the server goes down, there is no access to the disk
• File sharing goes through the storage server and across the network, which can be slow
• Limits on the location of disks--they must be near the server, within range of the disk bus
• Adding or subtracting disk space can be difficult
What we want is a shared disk pool that all servers can access.

Storage Area Network
What we want is something that looks like this:
[Diagram: NFS clients on the public Ethernet network; SAN participants connected over a separate SAN network to a shared disk pool]

Storage Area Network
Notice:
• You can take down a server and still maintain access to the disk pool via the other SAN participants
• Disk added to the pool is available to all servers, not just one
• Shared, high-speed access to the disk pool; you can run clustered copies of a SQL database or web server if those servers are also SAN participants
• You can still serve up the disk pool via an NFS or SMB server on a SAN-connected box
• "Serverless backups"--just send a command to copy blocks from disk A to disk B. Snapshots are easier and backup windows shorter--you can have a SAN participant handle moving a volume to tape

Storage Area Network
So how does this work? It's a scaled-up version of the old system:
• The commands being sent are the same standard disk commands: SCSI or ATA bus commands--READ, WRITE, etc.
• The network connecting the SAN servers to the disk is typically (but not always) higher speed, e.g. Fibre Channel
• Some extra glue allows concurrent access by more than one server--a shared (SAN) filesystem

Storage Area Network
A popular choice:
• SCSI for the bus commands (the commands sent over the wire)
• Fibre Channel for the SAN network
• EMC or similar software for the glue/volume layer
Fibre Channel runs at 2+ Gbit/s and can be deployed across distances of up to roughly 500 m, and up to 70 km with special equipment.

Storage Area Network
Another option is to use gigabit Ethernet for the SAN networking:
• Cheap! Commodity equipment, no need to learn new Fibre Channel skills, and you can reuse existing gear
• But also lower performance--Fibre Channel has higher bandwidth, and lower protocol overhead means more of that bandwidth is usable
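A rough back-of-the-envelope comparison illustrates the gap (assumed typical header sizes and a 1500-byte MTU; real numbers vary with jumbo frames and FC framing overhead):

```python
# Gigabit Ethernet: subtract per-frame overhead to estimate usable iSCSI payload rate.
line_rate = 1_000_000_000 / 8                  # 1 GbE in bytes/s
on_wire = 1500 + 14 + 4 + 8 + 12               # MTU + Eth header + FCS + preamble + inter-frame gap
tcp_payload = 1500 - 20 - 20                   # MTU minus IP and TCP headers
gbe_payload_rate = line_rate * tcp_payload / on_wire   # roughly 119 MB/s

# 2 Gbit Fibre Channel: a 2.125 Gbaud line with 8b/10b coding,
# i.e. 10 line bits per data byte -> roughly 212 MB/s of data.
fc_payload_rate = 2_125_000_000 / 10
```

So even before counting TCP/IP processing cost on the host, 2 Gb FC carries close to twice the payload of gigabit Ethernet.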

ATA over Ethernet
AoE uses raw Ethernet frames carrying ATA bus commands rather than SCSI. It is low cost, but since Ethernet frames are not routable, all devices must be on the same local network.
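A sketch of how thin this encapsulation is (field layout per the published AoE specification; the MAC addresses and helper name here are illustrative): an AoE request is just an Ethernet frame with EtherType 0x88A2 and a 10-byte AoE header addressing a "shelf" and "slot".

```python
import struct

AOE_ETHERTYPE = 0x88A2

def aoe_header_frame(dst_mac: bytes, src_mac: bytes,
                     shelf: int, slot: int, tag: int) -> bytes:
    """Ethernet header plus the 10-byte AoE common header (no ATA payload)."""
    eth = dst_mac + src_mac + struct.pack(">H", AOE_ETHERTYPE)
    aoe = struct.pack(
        ">BBHBBI",
        0x10,    # version 1 in the high nibble, flags clear
        0x00,    # error code
        shelf,   # major address: the "shelf"
        slot,    # minor address: the "slot"
        0x00,    # command 0 = issue ATA command
        tag,     # tag, echoed in the response to match request/reply
    )
    return eth + aoe

# Broadcast a request for shelf 1, slot 0 (illustrative source MAC).
frame = aoe_header_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                         shelf=1, slot=0, tag=0x1234)
```

There is no IP or TCP layer at all--which is exactly why it is cheap, and exactly why it cannot leave the local network.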

iSCSI
iSCSI sends SCSI bus commands over Ethernet, encapsulated inside TCP/IP:
• Cheap hardware!
• Well supported in the Linux, Solaris, and Windows worlds
• Because the SCSI traffic is inside TCP/IP, it is routable--you can run a SAN across wide area networks (with lower performance due to latency) and do things like mirroring for disaster recovery, or span a campus on high-performance networks
• Processing TCP/IP adds some overhead; some adapters use TCP offload chips
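To make "SCSI inside TCP/IP" concrete, here is a hedged sketch of the 48-byte basic header segment of an iSCSI SCSI Command PDU (layout per the iSCSI RFC; the helper names are made up). This byte string is simply written to an ordinary TCP socket:

```python
import struct

def iscsi_scsi_cmd_bhs(lun: int, task_tag: int, cdb: bytes,
                       expected_len: int, cmdsn: int, expstatsn: int) -> bytes:
    """48-byte Basic Header Segment for an iSCSI SCSI (read) Command PDU."""
    assert len(cdb) <= 16
    header = struct.pack(
        ">BB2xB3s8sIIII",
        0x01,                     # opcode: SCSI Command
        0x80 | 0x40,              # flags: F (final) + R (read)
        0,                        # TotalAHSLength
        b"\x00\x00\x00",          # DataSegmentLength (no immediate data)
        lun.to_bytes(8, "big"),   # logical unit number
        task_tag,                 # initiator task tag
        expected_len,             # expected data transfer length
        cmdsn,                    # command sequence number
        expstatsn,                # expected status sequence number
    )
    return header + cdb.ljust(16, b"\x00")   # the SCSI CDB fills bytes 32-47

# Wrap a READ(10) for one 512-byte block at LBA 123 on LUN 0:
read10 = (bytes([0x28, 0]) + (123).to_bytes(4, "big")
          + bytes([0]) + (1).to_bytes(2, "big") + bytes([0]))
pdu = iscsi_scsi_cmd_bhs(lun=0, task_tag=1, cdb=read10,
                         expected_len=512, cmdsn=1, expstatsn=1)
```

Note that the SCSI command itself is unchanged--iSCSI just gives it a TCP/IP-friendly envelope, which is what makes the traffic routable.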

iSCSI
Each “disk”/LUN is a RAID array that understands iSCSI.
[Diagram: NFS clients on the public Ethernet network; SAN participants on a dedicated Ethernet network to the iSCSI RAID arrays]

iSCSI
The green network is a dedicated (usually) gigabit Ethernet network that carries the SCSI commands encapsulated inside TCP/IP. The red network connects the SAN participants to other clients not on the SAN.
Important point: TCP/IP is routable. That means that (modulo latency) the devices can be located anywhere. We could have an iSCSI SAN participant in Root Hall and one in Spanagel; the Root iSCSI server can access the disk pool in Spanagel. We could also have a volume located at Fleet Numeric in the same SAN.
The price we pay for this is processing the TCP/IP overhead as iSCSI commands go up the network protocol stack. This can be alleviated in part by TCP offload chips.

Volume Software
Remember, the iSCSI targets are just block devices. iSCSI says nothing about concurrent access or multiple hosts using the same devices.
For that we need a SAN filesystem, which deconflicts concurrent access by hosts to the block devices.
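A toy illustration of the deconfliction idea (all names here are made up; a real SAN filesystem such as GFS2 or OCFS2 does this with a distributed lock manager coordinating across hosts, not in-process locks): before touching a block, a writer must hold the lock covering that block's extent.

```python
import threading
from collections import defaultdict

class ExtentLockManager:
    """Toy stand-in for a SAN filesystem's lock manager:
    one lock per fixed-size extent of blocks on a shared device."""

    def __init__(self, blocks_per_extent: int = 64):
        self.blocks_per_extent = blocks_per_extent
        self.locks = defaultdict(threading.Lock)   # (device, extent) -> lock

    def update(self, device: str, blocks: dict, block_no: int, fn):
        """Read-modify-write one block, serialized against other writers."""
        extent = block_no // self.blocks_per_extent
        with self.locks[(device, extent)]:         # only one writer per extent
            blocks[block_no] = fn(blocks.get(block_no))

mgr = ExtentLockManager()
shared = {}                        # stand-in for raw blocks on the shared LUN
mgr.update("lun0", shared, 123, lambda old: b"new data")
```

Without this layer, two hosts writing the same blocks would silently corrupt each other's data--the block devices themselves offer no protection.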

Volume Software
[Diagram: volumes Vol1 and Vol2 built from the SAN disk pool; NFS clients on the public Ethernet network]

SAN Software
The “volume software” allows you to build a concurrent-access filesystem out of one or more LUNs.

iSCSI
Example: five compute servers need read access to one weather data set. If the servers are all on the SAN, they can access the data directly.
Example: backup. Copy disk blocks directly, then have a tape-drive SAN participant copy them to tape.
Example: storage expansion. Just add more disk, and it is available to all SAN participants.

Competitors
iSCSI’s competitor is, for the most part, Fibre Channel. The concept is almost identical, but the SCSI commands are encapsulated in a Fibre Channel frame instead of TCP/IP.
Fibre Channel is typically higher performance--more data can be pushed across FC, and there is much less overhead in processing FC frames--but it is higher cost.
ATA over Ethernet is similar to FC in concept--directly inserting ATA commands into Ethernet frames--but it seems to have less market penetration.