Tony K.

• Universal Paths
  ◦ Path to a resource is always the same
  ◦ No matter where you are
• Transparent to clients
  ◦ View is one file system
  ◦ Physical location of data is abstracted
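
The idea can be pictured with a small lookup sketch (Python, with made-up namespace and server names; this is not Microsoft's DFS code): clients always use the same universal path, and a namespace layer decides which physical share it currently maps to.

```python
# Minimal sketch of namespace resolution (hypothetical paths and servers).
NAMESPACE = {
    "/dfs/projects": ["//server-a/projects", "//server-b/projects-mirror"],
    "/dfs/home":     ["//server-c/home"],
}

def resolve(universal_path: str) -> str:
    """Translate a universal path into one physical location."""
    for prefix, targets in NAMESPACE.items():
        if universal_path.startswith(prefix):
            # A real client would pick a target by site or cost; take the first.
            return universal_path.replace(prefix, targets[0], 1)
    raise FileNotFoundError(universal_path)

print(resolve("/dfs/projects/report.txt"))  # //server-a/projects/report.txt
```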

• Microsoft DFS
  ◦ Suite of technologies
  ◦ Uses SMB as the underlying protocol
• Network File System (NFS)
  ◦ Unix/Linux
• Andrew File System (AFS)
  ◦ Developed at Carnegie Mellon
  ◦ Used at: NASA, JPL, CERN, Morgan Stanley

• Allows a server to act as persistent storage
  ◦ For one or more clients
  ◦ Over a network
• Usually presented in the same manner as a local disk
• First developed in the 1970s
  ◦ Network File System (NFS)
    ▪ Created in 1985 by Sun Microsystems
    ▪ First widely deployed networked file system

• Ability to relocate volumes while online
• Replication of volumes
  ◦ Most support read-only replicas
  ◦ Some support read-write replicas
  ◦ Allows load balancing between servers
• Partial access in failure situations
  ◦ Only volumes on a failed server become unavailable
  ◦ Some support turning a read-write replica into a read-write master
• Often support file caching
  ◦ Some support offline access

NFS

• NFSv1
  ◦ ca.
  ◦ Sun internal release only
• NFSv2
  ◦ RFC 1094, 1989
  ◦ UDP only
  ◦ Completely stateless
  ◦ Locking, quotas, etc. outside the protocol
    ▪ Handled by extra RPC daemons

• NFSv3
  ◦ RFC 1813, 1995
  ◦ 64-bit file sizes and offsets
  ◦ Asynchronous writes
  ◦ File attributes included with other responses
  ◦ TCP support

• NFSv4
  ◦ RFC 3010, 2000
  ◦ RFC 3530, 2003
  ◦ Protocol development handed to IETF
  ◦ Performance improvements
  ◦ Mandated security (Kerberos)
  ◦ Stateful protocol

• Network Lock Manager
  ◦ Supports Unix System V style file locking APIs
• Remote Quota Reporting
  ◦ Allows users to view their storage quotas
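
As an illustration of what the lock manager provides, the client side just uses ordinary Unix advisory locking; on an NFSv2/v3 mount the kernel forwards such requests to the server's lock daemon rather than handling them locally. A minimal sketch (the path below is a placeholder):

```python
import fcntl, os

path = "/tmp/shared-counter"          # substitute a file on an NFS mount
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
try:
    fcntl.lockf(fd, fcntl.LOCK_EX)    # blocks until the lock is granted
    data = os.read(fd, 64) or b"0"    # read the current counter value
    os.lseek(fd, 0, os.SEEK_SET)
    os.write(fd, str(int(data) + 1).encode())
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)    # release so other clients can proceed
    os.close(fd)
```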

• Mapping users for access control not provided by NFS
• Central user management recommended
  ◦ Network Information Service (NIS)
    ▪ Previously called Yellow Pages
    ▪ Designed by Sun Microsystems
    ▪ Created in conjunction with NFS
  ◦ LDAP + Kerberos is a modern alternative
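
One way to see why central management matters: NFSv2/v3 requests identify users only by numeric uid/gid values, so every machine must agree on what those numbers mean. A small Unix-only sketch using the standard library:

```python
import os, pwd

st = os.stat("/etc/hostname")                      # any file will do
print("owner uid:", st.st_uid)
print("owner name here:", pwd.getpwuid(st.st_uid).pw_name)
# On a client with a different passwd database, the same uid could resolve
# to a different (or missing) user name -- hence NIS or LDAP to keep the
# numeric-to-name mapping identical everywhere.
```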

• Design requires trusted clients (other computers)
  ◦ Read/write access traditionally granted to IP addresses
  ◦ Up to the client to:
    ▪ Honor permissions
    ▪ Enforce access control
• RPC and the port mapper are notoriously hard to secure
  ◦ Designed to execute functions on the remote server
  ◦ Hard to firewall
    ▪ An RPC service is registered with the port mapper and assigned a random port
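
Roughly, the traditional trust model boils down to a host-based check like the sketch below (hypothetical export path and network; a real server drives this from its exports configuration): access is decided by client address, never by user identity.

```python
import ipaddress

EXPORTS = {
    "/srv/projects": [ipaddress.ip_network("192.168.10.0/24")],
}

def may_mount(export: str, client_ip: str) -> bool:
    """True if the client's address falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in EXPORTS.get(export, []))

print(may_mount("/srv/projects", "192.168.10.42"))   # True
print(may_mount("/srv/projects", "203.0.113.5"))     # False -- and note the
# server never checks *which user* is asking, only which machine.
```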

• NFSv4 solves most of the quirks
  ◦ Kerberos can be used to validate identity
  ◦ Validated identity prevents rogue clients from reading or writing data
• RPC is still annoying

AFS

 1983 ◦ Andrew Project began at Carnegie Mellon  1988 ◦ AFSv3 ◦ Installations of AFS outside Carnegie Mellon  1989 ◦ Transarc founded to commercialize AFS  1998 ◦ Transarc purchased by IBM  2000 ◦ IBM releases code as OpenAFS

• AFS has many benefits over traditional networked file systems
  ◦ Much better security
  ◦ Uses Kerberos authentication
  ◦ Authorization handled with ACLs
    ▪ ACLs are granted to Kerberos identities
    ▪ No ticket, no data
  ◦ Clients do not have to be trusted

• Scalability
  ◦ High client-to-server ratio
    ▪ 100:1 typical, 200:1 seen in production
  ◦ Enterprise sites routinely have > 50,000 users
  ◦ Caches data on the client
  ◦ Limited load balancing via read-only replicas
• Limited fault tolerance
  ◦ Clients have limited access if a file server fails

• Read and write operations occur in the file cache
  ◦ Only changes to the file are sent to the server on close
• Cache consistency occurs via a callback
  ◦ Client tells the server it has cached a copy
  ◦ Server will notify the client if a cached file is modified
• Callback must be re-negotiated if a time-out or error occurs
  ◦ Does not require re-downloading the file
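
A toy model of the callback scheme (hypothetical classes, not the OpenAFS API): clients work on cached copies, ship changes to the server on close, and the server breaks callbacks so other clients discard stale copies.

```python
class Server:
    def __init__(self):
        self.files = {"doc.txt": "v1 contents"}
        self.callbacks = {}                       # file name -> set of clients

    def fetch(self, name, client):
        self.callbacks.setdefault(name, set()).add(client)  # promise a callback
        return self.files[name]

    def store(self, name, data, writer):
        self.files[name] = data
        for c in self.callbacks.get(name, set()) - {writer}:
            c.invalidate(name)                    # break callbacks on other clients

class Client:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def open(self, name):
        if name not in self.cache:                # fetch only on a cache miss
            self.cache[name] = self.server.fetch(name, self)
        return self.cache[name]

    def close(self, name, new_data):
        self.cache[name] = new_data
        self.server.store(name, new_data, self)   # changes shipped on close

    def invalidate(self, name):
        self.cache.pop(name, None)                # drop stale copy; re-fetch later

srv = Server()
a, b = Client(srv), Client(srv)
a.open("doc.txt"); b.open("doc.txt")
a.close("doc.txt", "v2 contents")                 # server breaks b's callback
print(b.open("doc.txt"))                          # -> v2 contents
```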

• Volumes are the basic unit of AFS file space
• A volume contains
  ◦ Files
  ◦ Directories
  ◦ Mount points for other volumes
• Top volume is root.afs
  ◦ Mounted to /afs on clients
  ◦ Alternate is dynamic root
  ◦ Dynamic root populates /afs with all known cells

• Volumes can be mounted in multiple locations
• Quotas can be assigned to volumes
• Volumes can be moved between servers
  ◦ Volumes can be moved even if they are in use

• Volumes can be replicated to read-only clones
• Read-only clones can be placed on multiple servers
• Clients will choose a clone to access
• If a copy becomes unavailable, the client will use a different copy
• Result is simple load balancing
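
A client-side sketch of that behaviour (hypothetical volume and server names): pick any read-only clone for load balancing, and fall back to another clone if one is unreachable.

```python
import random

REPLICAS = {
    "vol.software": ["fs1.example.edu", "fs2.example.edu", "fs3.example.edu"],
}
DOWN = {"fs2.example.edu"}               # pretend this file server has failed

def read_volume(volume: str) -> str:
    servers = REPLICAS[volume][:]
    random.shuffle(servers)              # crude load balancing across clones
    for server in servers:
        if server not in DOWN:           # stand-in for actually trying the RPC
            return f"read {volume} from {server}"
    raise IOError("no replica reachable")

print(read_volume("vol.software"))
```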

• Whole-file locking
  ◦ Prevents shared databases
  ◦ Deliberate design decision
• Directory-based ACLs
  ◦ Cannot assign an ACL to an individual file
• Complicated to set up and administer
• No commercial backer
  ◦ Several consulting companies sell support

• Allow multiple clients to read/write files at the same time
• Designed for speed and/or redundancy
• Often provide shared access to the underlying file system (block-level)
• Clients can communicate with each other
  ◦ Lock negotiation
  ◦ Transfer of blocks
• Many provide fault tolerance

• Lustre
  ◦ Cluster File Systems, Inc.
• General Parallel File System (GPFS)
  ◦ IBM
• Global File System (GFS)
  ◦ Red Hat

SAN is not NAS backwards!

• Server that serves files to network-attached systems
  ◦ CIFS (SMB) and NFS are two example protocols a NAS may use
• Often used to refer to ‘turn-key’ solutions
• Crude analogy: a formatted hard drive
  ◦ Accessed through the network

• Designed to consolidate storage space into one cohesive unit
  ◦ Note: can also spread data!
• Pooling storage allows better fault tolerance
  ◦ RAID across disks and arrays
  ◦ Multiple paths between SAN and servers
  ◦ Secondary servers can take over for failed servers
  ◦ Analogy: an unformatted hard drive

• SANs normally use block-level protocols
  ◦ Exposed to the OS as a block device
    ▪ Same as a physical disk
    ▪ Can boot off of a SAN device
  ◦ Only one server can read/write a target block
    ▪ Contrast to NAS access
    ▪ Multiple servers can still be allocated to a SAN
• SANs should never be directly connected to another network
  ◦ Especially not the Internet!
  ◦ SANs typically have their own network
  ◦ SAN protocols are designed for speed, not security
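
To contrast with file-level NAS access, the sketch below shows what block-level access looks like: the initiator reads and writes numbered blocks and has no notion of files. A plain file stands in for the LUN here; opening a real block device exposed over iSCSI or Fibre Channel works the same way but needs privileges.

```python
import os

BLOCK_SIZE = 512
DISK = "/tmp/fake_lun.img"                        # stand-in for a SAN LUN

with open(DISK, "wb") as f:
    f.truncate(BLOCK_SIZE * 1024)                 # 512 KiB "disk", no file system

def write_block(lba: int, data: bytes) -> None:
    fd = os.open(DISK, os.O_WRONLY)
    os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)   # address by block number only
    os.write(fd, data.ljust(BLOCK_SIZE, b"\0"))
    os.close(fd)

def read_block(lba: int) -> bytes:
    fd = os.open(DISK, os.O_RDONLY)
    os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
    data = os.read(fd, BLOCK_SIZE)
    os.close(fd)
    return data

write_block(7, b"raw bytes, no notion of files here")
print(read_block(7).rstrip(b"\0"))
```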

• Fibre Channel
  ◦ Combination interconnect and protocol
  ◦ Bootable
• iSCSI
  ◦ Internet Small Computer System Interface
  ◦ Runs over TCP/IP
  ◦ Bootable
• AoE (ATA over Ethernet)
  ◦ No overhead from TCP/IP
  ◦ Not routable
  ◦ Not a bad thing!

• File Servers
• Database Servers
• Virtual machine images
• Physical machine images

SIDEBAR

• The disk hardware:
  ◦ Writes data to a specific location on the drive
    ▪ On a selected platter
    ▪ On a specified track
    ▪ Rewriting a complete specific sector
  ◦ 1’s and 0’s
  ◦ Disks know nothing of files

• The OS:
  ◦ Keeps track of file names
  ◦ File system
    ▪ Maps file names
    ▪ To where on the disk the 1’s and 0’s are kept for the data
    ▪ File may be broken up (fragmented)
    ▪ To fit available space chunks on disk
  ◦ A specific part of the disk is reserved for this mapping
    ▪ FAT, VFAT, FAT32, HPFS, NTFS, EXT2, EXT3, etc.
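
A toy version of that mapping (loosely FAT-like, purely illustrative, not any real on-disk format): a table maps file names to the list of blocks holding the data, and the blocks need not be contiguous.

```python
BLOCK_SIZE = 4                                   # tiny blocks for the example
disk = bytearray(b"....orld....Hello, w!...")    # raw bytes, deliberately scrambled

allocation_table = {
    "greeting.txt": [3, 4, 1, 5],                # a fragmented file
}

def read_file(name: str) -> bytes:
    """Reassemble a file by following its block list in order."""
    return b"".join(
        bytes(disk[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE])
        for b in allocation_table[name]
    )

print(read_file("greeting.txt"))                 # b'Hello, world!...'
```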

• Redundant Array of Independent (Inexpensive) Disks
  ◦ 2 or more drives
  ◦ Appears to the system as 1 drive
• Various levels – RAID 0, 1, 2, 3, 4, 5, 6
  ◦ Also 10, 50, 60
• RAID function can be done via hardware or software

• RAID 0: Striped
  ◦ 2 or more drives configured as one
    ▪ Data is striped across multiple drives
  ◦ Pluses:
    ▪ “Larger” capacity
    ▪ Speed
    ▪ Can access data from multiple drives simultaneously
  ◦ Minuses:
    ▪ Higher failure rate than a single drive
    ▪ Total data loss if one drive fails
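
A small worked example of the address mapping behind striping (illustration only): logical block i is placed on disk (i mod N) at offset (i // N), so consecutive blocks land on different drives and can be read in parallel.

```python
NUM_DISKS = 3

def locate(logical_block: int):
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % NUM_DISKS, logical_block // NUM_DISKS

for lb in range(6):
    disk, offset = locate(lb)
    print(f"logical block {lb} -> disk {disk}, offset {offset}")
# The flip side: lose any one of the three disks and every large file loses
# a third of its blocks.
```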

• RAID 1: Mirrored
  ◦ 2 or more drives configured as one
    ▪ Usually 2 drives
    ▪ Data copied (duplicated) on both drives
  ◦ Pluses:
    ▪ Lower “failure” rate
    ▪ No data loss if one drive fails
  ◦ Minuses:
    ▪ “Wasted” drive space
    ▪ e.g. 50% for a 2-drive system
    ▪ Small performance impact

• RAID 5: Parity
  ◦ 3 or more drives configured as one
    ▪ Data spread across multiple drives with parity
    ▪ If one drive fails, data can be rebuilt
  ◦ Pluses:
    ▪ “Larger” drive
    ▪ Can survive one drive failure
  ◦ Minuses:
    ▪ More complex
    ▪ Some “wasted” drive space
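
A worked XOR-parity example (concept only; real RAID 5 also rotates the parity block across drives): parity is the XOR of the data strips, and any single lost strip can be rebuilt by XOR-ing the survivors with the parity.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data strips on three drives
parity = xor(xor(d1, d2), d3)            # stored on the fourth drive

rebuilt_d2 = xor(xor(d1, d3), parity)    # drive 2 failed: rebuild its strip
print(rebuilt_d2 == d2)                  # True
```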

• RAID 6: Enhanced RAID 5
  ◦ 4 or more drives configured as one
    ▪ Additional parity block
  ◦ Pluses:
    ▪ Improved data safety
    ▪ Can survive 2 drive failures
  ◦ Minuses:
    ▪ More “wasted” space
    ▪ Write penalty

• Combinations of 0 and 1
• RAID 01
  ◦ Mirror of stripes
• RAID 10
  ◦ Stripe of mirrors
• 4 or more drives configured as one
  ◦ Pluses:
    ▪ Speed
    ▪ Data safety
  ◦ Minuses:
    ▪ “Wasted” capacity

• No common definition
  ◦ RAID 50: combination of 5 and 0
    ▪ Sometimes referred to as 5+0
  ◦ RAID 60: combination of 6 and 0
    ▪ Sometimes referred to as 6+0

• SAN can be “RAIDed”
  ◦ SAN blocks spread across the network