Introduction to EVA. Keith Parris, Systems/Software Engineer, HP Services Multivendor Systems Engineering. Budapest, Hungary, 23 May 2003.

Presentation transcript:

1 © 2002 hp Introduction to EVA. Keith Parris, Systems/Software Engineer, HP Services Multivendor Systems Engineering. Budapest, Hungary, 23 May 2003. Presentation slides on this topic courtesy of: chet jacobs, senior technical consultant, and karen fay, senior technical consultant

© 2002 hp page 2 HSV110 storage system virtualization techniques

© 2002 hp page 3 HSV110 virtualization: subjects covered
– distributed virtual RAID versus conventional RAID
– disk group characteristics
– virtual disk ground rules
– virtual disk leveling
– distributed sparing
– redundant storage sets
– Snapshot and SnapClone implementation
– configuration remarks

© 2002 hp page 4 HSV110 virtualization: distributed versus conventional RAID

conventional RAID:
– performance limited by # of disk drives in the StorageSet
– possible to find customer data if one knows the LBN and chunk size
– load balancing of applications and databases over the available back-end (SCSI) buses is required
– I/Os balanced across the StorageSet

distributed virtual RAID:
– performance limited by # of disk drives in the disk group
– customer data distributed across all disks in the group
– eliminates load-balancing procedures for applications and databases
– I/Os balanced across the disk group
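To make the "possible to find customer data" point concrete: with conventional striping, a known chunk size and member count turn any LBN into a disk and offset by simple arithmetic. A minimal sketch with hypothetical parameters (the chunk size and member count below are illustrative, not HSG80 values):

```python
# Locating a block in a conventional stripe set: with the chunk size and
# member count known, LBN -> (disk, offset) is pure arithmetic.
# Hypothetical parameters for illustration, not actual HSG80 behavior.

CHUNK_BLOCKS = 256        # chunk size in 512-byte blocks (128 KB)
MEMBERS = 6               # disks in the stripe set

def locate(lbn: int) -> tuple[int, int]:
    """Return (member disk index, block offset on that disk) for an LBN."""
    chunk = lbn // CHUNK_BLOCKS          # which chunk the LBN falls in
    stripe = chunk // MEMBERS            # which stripe row holds that chunk
    disk = chunk % MEMBERS               # round-robin placement of chunks
    offset = stripe * CHUNK_BLOCKS + lbn % CHUNK_BLOCKS
    return disk, offset

print(locate(100_000))   # -> (0, 16800)
```

Distributed virtual RAID removes exactly this determinism: the controller's own metadata, not a fixed formula, decides where each chunk lands.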

© 2002 hp page 5 HSV110 virtualization: conventional versus distributed virtual RAID
(diagram: HSG80 conventional RAID sets place a RAID 5 volume, a RAID 0 volume, and a RAID 1 volume on dedicated spindles across SCSI buses 1 through 6; with HSV110 distributed virtual RAID the same RAID 5, RAID 1, and RAID 0 volumes are spread so the workload is evenly distributed across all spindles)

© 2002 hp page 6 HSV110 virtualization: disk group characteristics
– minimum: 8 physical disk drives
– VRAID5 requires a minimum of 5 physical disk spindles (no problem)
– VRAID1 uses an even number of spindles
– maximum: the # of physical disk drives present
– will automatically choose spindles across shelves (in V2)
– maximum # of disk groups per subsystem: 16
– net capacity TBD (it will change as disk capacities grow)
– contains the spare disk space
– survives 0, 1, or 2 disk failures, called "none, single, or double" in the element manager
– chunk size 2 MB (fixed), called a PSEG
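These rules lend themselves to a quick sanity check. A minimal sketch, assuming a hypothetical check_disk_group helper (not an HSV management interface):

```python
# Sanity checks for the disk-group rules listed above (minimum of 8
# drives, even spindle counts for VRAID1, at most 16 groups per
# subsystem). Hypothetical helper for illustration only.

MAX_GROUPS_PER_SUBSYSTEM = 16

def check_disk_group(n_drives: int, uses_vraid1: bool, n_groups: int) -> list[str]:
    problems = []
    if n_drives < 8:
        problems.append("disk group needs at least 8 physical drives")
    if uses_vraid1 and n_drives % 2 != 0:
        problems.append("VRAID1 pairs spindles, so use an even drive count")
    if n_groups > MAX_GROUPS_PER_SUBSYSTEM:
        problems.append("subsystem supports at most 16 disk groups")
    return problems

print(check_disk_group(7, uses_vraid1=True, n_groups=1))
# ['disk group needs at least 8 physical drives',
#  'VRAID1 pairs spindles, so use an even drive count']
```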

© 2002 hp page 7 HSV110 virtualization: virtual disk ground rules
virtual disk redundancy:
– VRAID0 (none): data is striped across all physical disks in the disk group
– VRAID5 (moderate): data is striped with parity across all physical disks in the disk group; always 5 (4+1) physical disks per stripe are used
– VRAID1 (high): data is striped and mirrored across all physical disks (an even number of them) in the disk group; established pairs of physical disks mirror each other
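The raw capacity each level consumes follows directly from these rules: VRAID0 stores data only, VRAID5's 4+1 stripes add one parity chunk per four data chunks, and VRAID1 doubles everything. A back-of-the-envelope sketch (ignores metadata and sparing overhead):

```python
# Raw capacity consumed per unit of usable virtual-disk space for each
# redundancy level described above. Rough arithmetic only, not the
# controller's metadata accounting.

RAW_FACTOR = {
    "VRAID0": 1.0,        # striping only, no redundancy
    "VRAID5": 5.0 / 4.0,  # 4 data + 1 parity per stripe
    "VRAID1": 2.0,        # mirrored pairs
}

def raw_needed(usable_gb: float, level: str) -> float:
    return usable_gb * RAW_FACTOR[level]

for level in RAW_FACTOR:
    # raw GB behind a 100 GB virtual disk: 100.0, 125.0, 200.0
    print(level, raw_needed(100, level))
```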

© 2002 hp page 8 conventional RAID5 algorithm
(diagram: chunks 00 through 11 striped across disks 0 through 4, with one parity chunk per stripe (Parity 00-03, Parity 04-07, Parity 08-11) rotating among the disks; the virtual disk address space maps LBNs onto these data and parity chunks)
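A sketch of the layout the original diagram showed: twelve data chunks striped across five disks with the parity chunk rotating one position per stripe. The rotation direction below is an assumption for illustration; real controllers vary in the exact pattern:

```python
# Reconstruction of the conventional RAID 5 layout sketched on the slide:
# each stripe holds 4 data chunks plus 1 parity chunk, and the parity
# position rotates per stripe. Illustrative only.

DISKS = 5

def stripe_row(stripe: int, first_chunk: int) -> list[str]:
    parity_disk = (DISKS - 1 - stripe) % DISKS   # parity rotates each stripe
    row, chunk = [], first_chunk
    for disk in range(DISKS):
        if disk == parity_disk:
            row.append(f"P({first_chunk:02d}-{first_chunk + 3:02d})")
        else:
            row.append(f"C{chunk:02d}")
            chunk += 1
    return row

for s in range(3):                      # the three stripes on the slide
    print(f"stripe {s}:", stripe_row(s, s * 4))
# stripe 0: ['C00', 'C01', 'C02', 'C03', 'P(00-03)']
# stripe 1: ['C04', 'C05', 'C06', 'P(04-07)', 'C07']
# stripe 2: ['C08', 'C09', 'P(08-11)', 'C10', 'C11']
```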

© 2002 hp page 9 VRAID5 algorithm
(diagram: data and parity chunks of 4+1 stripes spread across disks 0 through 4 of the disk group; the virtual disk address space maps LBNs onto the data chunks)
– always 4+1 RAID5
– guaranteed to have each PSEG on a separate spindle in the disk group
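Unlike conventional RAID 5, the five PSEGs of each VRAID5 stripe can come from anywhere in a larger disk group, as long as no two share a spindle. A toy placement that honors that guarantee (round-robin is illustrative, not the controller's real algorithm):

```python
# Toy VRAID5 PSEG placement: every stripe is 4 data + 1 parity, and the
# five PSEGs of a stripe must land on five distinct spindles, drawn from
# the whole disk group (which needs at least 5 spindles).

import itertools

def place_stripes(group_size: int, n_stripes: int) -> list[tuple[int, ...]]:
    spindles = itertools.cycle(range(group_size))   # walk the group round-robin
    placements = []
    for _ in range(n_stripes):
        stripe = tuple(next(spindles) for _ in range(5))  # 4 data + 1 parity
        assert len(set(stripe)) == 5, "PSEGs of one stripe must not share a spindle"
        placements.append(stripe)
    return placements

for s in place_stripes(group_size=12, n_stripes=4):
    print(s)
# (0, 1, 2, 3, 4) (5, 6, 7, 8, 9) (10, 11, 0, 1, 2) (3, 4, 5, 6, 7)
```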

© 2002 hp page 10 VRAID1 algorithm
(diagram: VRAID1 data chunks mirrored in established pairs across the disks of the group; the virtual disk address space maps LBNs onto the mirrored data chunks)

© 2002 hp page 11 HSV110 virtualization: virtual disk leveling
goal is to provide proportional capacity leveling across all disk drives within the disk group.
– example 1: disk group = 100 drives, all 18 GB: each disk will contain 1% of the virtual disk
– example 2: disk group = 100 drives, 50 × 72 GB and 50 × 36 GB: each 72 GB disk will contain > 1% of the virtual disk (approximately double the share of the 36 GB drives, because it has double the capacity), and each 36 GB disk will contain < 1% of the virtual disk
load balancing is achieved through capacity leveling
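The proportional shares in both examples fall out of a one-line calculation: each spindle's share of a virtual disk is its capacity divided by the group's total capacity. A minimal sketch reproducing the slide's numbers:

```python
# Capacity leveling as described above: each spindle holds a share of
# every virtual disk proportional to the spindle's capacity.

def leveling_shares(disk_sizes_gb: list[int]) -> list[float]:
    total = sum(disk_sizes_gb)
    return [size / total for size in disk_sizes_gb]

# example 1: 100 identical 18 GB drives -> each holds 1% of a virtual disk
shares = leveling_shares([18] * 100)
print(round(shares[0] * 100, 3), "%")    # 1.0 %

# example 2: 50 x 72 GB + 50 x 36 GB -> 72 GB drives hold double the share
shares = leveling_shares([72] * 50 + [36] * 50)
print(round(shares[0] * 100, 3), "%")    # 1.333 % (> 1%, a 72 GB drive)
print(round(shares[-1] * 100, 3), "%")   # 0.667 % (< 1%, a 36 GB drive)
```

Re-running leveling_shares with extra spindles appended shows how shares re-level when capacity is added, which is the behavior the next slide describes.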

© 2002 hp page 12 HSV110 virtualization: virtual disk leveling – dynamic pool capacity changes
– pool capacity can be added in small increments (1 disk minimum)
– need more capacity or performance in a disk group? add more spindles
– disks run at optimum throughput (dynamic load balancing)
(diagram: the RAID 5, RAID 0, and RAID 1 volumes are re-leveled across the enlarged disk group, leaving capacity available for expansion)

© 2002 hp page 13 HSV110 virtualization: distributed sparing
note: we no longer spare on separate spindles.
– chunks are allocated, but not dedicated as spares, on all disk drives of the disk group, to survive 1 or 2 disk drive failures
– allocation algorithm (see the sketch after example #3 below): single (1) = capacity of 2 × the largest spindle in the disk group; double (2) = capacity of 4 × the largest spindle in the disk group
– hint: spindles have a semi-permanent paired relationship for VRAID1; that's why 2 times

© 2002 hp page 14 HSV110 virtualization: distributed sparing
example #1: disk group: 8 × 36 GB, protection level: single (1)
– total disk group size? 288 GB
– spare allocation? 72 GB
– maximum size for total virtual disks in the disk group? 216 GB
note: minus overhead for metadata, formatted-disk reduction, and binary-to-decimal conversion

© 2002 hp page 15 HSV110 virtualization: distributed sparing
example #2: disk group: 120 × 36 GB, protection level: double (2)
– total disk group size? 4.32 TB
– spare allocation? 144 GB
– maximum size for total virtual disks in the disk group? 4.176 TB
note: minus overhead for metadata, formatted-disk reduction, and binary-to-decimal conversion

© 2002 hp page 16 HSV110 virtualization: distributed sparing
example #3: disk group: 8 × 36 GB and 6 × 72 GB, protection level: single (1)
– total disk group size? 720 GB
– spare allocation? 144 GB
– maximum size for total virtual disks in the disk group? 576 GB
note: minus overhead for metadata, formatted-disk reduction, and binary-to-decimal conversion
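The allocation rule and all three examples reduce to a short calculation, as referenced above: the spare reservation is 2 × the largest spindle for single protection and 4 × for double, and the remainder is available for virtual disks (before the metadata and formatting overhead the slides note). A sketch reproducing examples #1 through #3:

```python
# Distributed-sparing arithmetic from the slides: single protection
# reserves twice the largest spindle's capacity, double reserves four
# times it, spread as chunks across the whole group. Ignores the
# metadata / formatting / binary-decimal overhead noted above.

def disk_group_capacity(disks_gb: list[int], protection: int):
    total = sum(disks_gb)
    spare = 2 * protection * max(disks_gb)   # protection: 1 = single, 2 = double
    return total, spare, total - spare       # raw, spare, max virtual-disk space

print(disk_group_capacity([36] * 8, protection=1))            # (288, 72, 216)
print(disk_group_capacity([36] * 120, protection=2))          # (4320, 144, 4176)
print(disk_group_capacity([36] * 8 + [72] * 6, protection=1)) # (720, 144, 576)
```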

© 2002 hp page 17 HSV110 virtualization: distributed sparing
virtual disk blocks are automatically regenerated to restore redundancy.
(diagram: before a failure, the group holds a high-redundancy volume (RAID 1), a moderate-redundancy volume (RAID 5), and available storage space of about 2 disks of virtual space; after a drive failure, redundancy is temporarily compromised, then automatically regenerated, with the data regenerated and distributed across the virtual pool and the available space reduced to less than 1 disk)

© 2002 hp page 18 best practices for disk groups
– when using mostly VRAID1, use even spindle counts in disk groups
– if you need to isolate performance or disk-failure impacts, use separate groups; example: the log files for a database should be in a different group than the data area
– try to keep disk groups to like disk capacities and speeds, but bring unlike drive capacities into a disk group in pairs

© 2002 hp page 19 HSV110 virtualization: redundant storage sets
– reduces the chance of data loss in large (> 12 physical disks) disk groups
– not visible on the user interface; completely managed by the HSV controllers
– typical size: 6 to 12 physical disks
– as soon as a redundant storage set exceeds 12 physical disks, it splits into 2 RSSs of 6 disks each
– failed disk drive recovery (VRAID5) is restricted to the affected redundant storage set only
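A toy model of the split behavior described above, assuming the simplest reading of the slide (a set that would grow past 12 members sheds an RSS of 6); this is not the HSV firmware's actual algorithm:

```python
# Toy model: a redundant storage set never exceeds 12 members; growth
# past that point splits off an RSS of 6 disks. Illustrative only.

def rss_layout(n_disks: int) -> list[int]:
    sets = []
    remaining = n_disks
    while remaining > 12:       # a set may not exceed 12 members...
        sets.append(6)          # ...so an RSS of 6 splits off instead
        remaining -= 6
    sets.append(remaining)
    return sets

print(rss_layout(8))    # [8]      one RSS, no split needed
print(rss_layout(13))   # [6, 7]   exceeded 12, so the set split
print(rss_layout(24))   # [6, 6, 12]
```

Keeping each VRAID5 rebuild inside one such set is what limits the impact of a failed drive to a fraction of a large disk group.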

© 2002 hp page 20 HSV110 storage system point-in-time copy techniques

© 2002 hp page 21 HSV110 virtualization: Snapshot and SnapClone implementation
Snapshot: data is copied from the virtual disk to the Snapshot on demand (before it is modified on the parent volume).
– space efficient, "virtually capacity free": chunks are allocated in the disk group on demand; the Snapshot is removed if the disk group becomes full
– space guaranteed, "standard": chunks are allocated in the disk group at the moment of Snapshot creation; the Snapshot allocation remains available if the disk group becomes full
– 7 active snapshots per parent volume (in V2)
– must live in the same disk group as the parent
– "preferred" pathed by the same controller as the parent volume
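The on-demand copy that makes the space-efficient variant "virtually capacity free" is classic copy-before-write: a chunk is materialized in the Snapshot only when the parent is about to overwrite it. A toy model of that behavior (illustrative class names, not controller code):

```python
# Copy-before-write snapshot sketch: the snapshot stores only chunks the
# parent has overwritten since snap creation; all other reads fall
# through to the parent. Toy model for illustration.

class Parent:
    def __init__(self, chunks):
        self.chunks = list(chunks)
        self.snapshots = []

    def snap(self):
        s = Snapshot(self)
        self.snapshots.append(s)
        return s

    def write(self, i, data):
        for s in self.snapshots:
            s.preserve(i, self.chunks[i])   # copy the old chunk out on demand
        self.chunks[i] = data

class Snapshot:
    def __init__(self, parent):
        self.parent = parent
        self.preserved = {}                 # only chunks the parent overwrote

    def preserve(self, i, old):
        self.preserved.setdefault(i, old)   # first overwrite wins

    def read(self, i):
        return self.preserved.get(i, self.parent.chunks[i])

vol = Parent(["a0", "b0", "c0"])
snap = vol.snap()                    # contents identical, nothing copied yet
vol.write(1, "b1")                   # old "b0" is preserved in the snapshot
print(snap.read(1), vol.chunks[1])   # b0 b1
```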

© 2002 hp page 22 HSV110 virtualization: Snapshot and SnapClone implementation
SnapClone, the "virtually instantaneous SnapClone" (COPY): data is copied from the virtual disk to the SnapClone in the background.
– chunks are allocated in the disk group at the moment of SnapClone creation
– can be presented to a host and used immediately
– any group may be the home for the SnapClone (in V2)
– the SnapClone's RAID level will match the parent volume (for now)
– independent volume when fully realized; may be preferred-pathed to either controller
yuck!
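What makes the SnapClone "virtually instantaneous" is that reads of not-yet-copied chunks fall through to the parent while a background task copies everything; once every chunk is copied, the clone is fully independent. A toy model under those assumptions:

```python
# Background-copy clone sketch: usable immediately, independent once the
# background copy completes. Toy model, not HSV controller code.

class SnapClone:
    def __init__(self, parent_chunks):
        self.parent = parent_chunks
        self.copied = [None] * len(parent_chunks)

    def read(self, i):
        # until the background copy reaches chunk i, serve it from the parent
        return self.copied[i] if self.copied[i] is not None else self.parent[i]

    def background_copy_step(self, i):
        if self.copied[i] is None:
            self.copied[i] = self.parent[i]

    @property
    def independent(self):
        return all(c is not None for c in self.copied)

clone = SnapClone(["a", "b", "c"])
print(clone.read(2), clone.independent)   # c False - usable immediately
for i in range(3):
    clone.background_copy_step(i)
print(clone.independent)                  # True - now a fully realized volume
```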

© 2002 hp page 23 HSV110 virtualization: space-guaranteed Snapshot creation and utilization
(timeline: at 12:00 noon a snap of volume "A" is created and the contents are identical; at 12:05 volume "A" receives updates (T1), so by 12:10 the contents differ while the snap keeps the contents as of noon; at 12:15 volume "A" receives more updates (T3), and at 12:20 the contents still differ, with the snap unchanged as of noon)

© 2002 hp page 24 HSV110 virtualization: space-efficient Snapshot creation and utilization
(timeline: the same sequence as the space-guaranteed case: the snap created at 12:00 noon keeps the contents as of noon while volume "A" diverges through the 12:05 and 12:15 updates)

© 2002 hp page 25 HSV110 virtualization: Snapshot versus SnapClone