Davie 5/18/2010
Thursday, May 20th @ 5:30pm – Ursa Minor
Co-sponsored with CSS
Guest speakers:
- Dr. Craig Rich – TBA
- James Schneider – Cal Poly "State of the Network" Address
- Sean Taylor – Reverse Engineering for Beginners
Free food!
12/2/2015
Friday, May 21st @ 5:30pm – 98P 2-007 (you better know where this is!)
Games: Starcraft, TF2, FEAR, Bad Company 2 (Linux, GotY Edition)
Consoles welcome
Music
Free food
Wednesday, May 19th @ 1:00pm
Sean McAllister – Structured Exception Handling
RAID – Redundant Array of Inexpensive (or Independent) Disks
Combines multiple physical devices to achieve increased performance and/or reliability
Added benefit: the OS sees a single, large device
One thing RAID is not: a backup solution. End of story.
- Stop arguing.
RAID functions by combining three concepts to achieve the desired results:
- Striping – splitting data across multiple disks to maximize I/O bandwidth
- Mirroring – storing a copy of the data on multiple disks to guard against drive failure
- Error correction – parity calculations to find and repair bad data; also used to distribute data across drives
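The parity idea above can be sketched in a few lines of Python: XOR the data blocks together, and any single missing block can be rebuilt by XOR-ing the parity with the survivors. (A toy sketch of the concept, not how the kernel actually implements it; `parity` is a hypothetical helper name.)

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized blocks together, byte by byte, to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three "disks" worth of data plus one parity block
disks = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(disks)

# Disk 1 dies; rebuild its contents from the survivors plus parity
rebuilt = parity([disks[0], disks[2], p])
assert rebuilt == disks[1]
```

Because XOR is its own inverse, the same function computes the parity and performs the repair.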
Terminology:
- Array – a collection of disks that operate as one
- Degraded array – an array with a component disk missing, but which can still function
- Failed array – an array with enough disks missing to prevent all functionality
- Hot spare – an extra disk that lets a degraded array repair itself (won't help failed arrays, though)
- Reshape – modifying an array's size or level
RAID flavors:
- Levels 0–6
- Nested RAID – combines two levels
- Just a Bunch Of Disks (JBOD) & spanning – concatenates one disk to the end of another; no performance or reliability improvements
RAID 0
- Data is striped across multiple disks (minimum of two)
- No redundancy: lose one disk, lose all data
- High read/write throughput: disks can read or write simultaneously, without costly parity calculations
- Results in an array with the capacity of all n disks
- Difficult to reshape, and therefore to expand
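The striping layout can be sketched with a hypothetical mapping helper: consecutive blocks rotate across the disks, which is where the parallel throughput comes from.

```python
def stripe_locate(block, n_disks):
    """RAID 0: block i lives on disk (i mod n), at stripe row (i // n)."""
    return block % n_disks, block // n_disks

# With three disks, consecutive blocks land on different disks,
# so sequential reads and writes can proceed in parallel
assert [stripe_locate(b, 3) for b in range(4)] == [(0, 0), (1, 0), (2, 0), (0, 1)]
```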
RAID 1
- Data is mirrored across multiple disks (minimum of two)
- Full redundancy: lose all but one disk and the data is still good
- High read, low write throughput: reads can hit different disks simultaneously, but the same data must be written multiple times
- Results in an array with the capacity of one disk
- Can be reshaped to RAID 5
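The read/write asymmetry above can be modeled in a toy mirror class (illustrative only; the `Mirror` name and its methods are made up for this sketch):

```python
class Mirror:
    """Toy RAID 1: every write goes to all disks; a read can use any one."""
    def __init__(self, n_disks):
        self.disks = [dict() for _ in range(n_disks)]

    def write(self, block, data):
        for d in self.disks:          # low write throughput: n copies of everything
            d[block] = data

    def read(self, block, surviving=0):
        return self.disks[surviving][block]   # any surviving disk will do

m = Mirror(2)
m.write(7, b"data")
assert m.read(7, surviving=1) == b"data"   # disk 0 lost? data still good
```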
RAID 2, 3 & 4
- I don't bother with these; neither should you
- RAID 2: sounds like CS magic – http://en.wikipedia.org/wiki/RAID_2#RAID_2
- RAID 3 & 4: striped with a single dedicated parity disk; use RAID 5 or 6 instead
RAID 5
- Data is striped across multiple disks (minimum of three – unless you do something silly like me)
- Parity is calculated and distributed: lose any one disk and parity allows it to be regenerated
- Increased overhead: reads and writes require parity calculations
- Results in an array with the capacity of n−1 disks
- Can be reshaped to RAID 6
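A sketch of the two ideas that distinguish RAID 5: XOR parity for regeneration, and rotating the parity block across disks so no single disk becomes a bottleneck. The rotation formula here is illustrative, not the exact md layout; both helper names are hypothetical.

```python
from functools import reduce

def xor_blocks(*blocks):
    """XOR equal-sized blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def parity_disk(stripe, n_disks):
    """Rotate which disk holds parity, stripe by stripe (illustrative layout)."""
    return (n_disks - 1 - stripe) % n_disks

# 4 disks: each stripe = 3 data blocks + 1 parity block, so capacity is n-1
data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_blocks(*data)

# Lose any single block; the survivors plus parity regenerate it
assert xor_blocks(data[0], data[2], p) == data[1]

# Parity lands on a different disk each stripe
assert [parity_disk(s, 4) for s in range(4)] == [3, 2, 1, 0]
```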
RAID 6
- Data is striped across multiple disks (minimum of four – unless you do something silly like me)
- Two parity blocks are calculated and distributed: lose any two disks and parity allows regeneration
- Increased overhead: reads and writes require parity calculations
- Results in an array with the capacity of n−2 disks
- Can be reshaped to RAID 5
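The capacity rules from the last few slides can be summed up in one hypothetical helper (assuming all disks are the same size; real RAID 6 parity needs Reed–Solomon math, not shown here):

```python
def raid_capacity(level, n_disks, disk_size):
    """Usable capacity per the rules above (equal-sized disks assumed)."""
    if level == 1:
        return disk_size              # mirrors: one disk's worth, period
    overhead = {0: 0, 5: 1, 6: 2}     # disks "spent" on parity
    return (n_disks - overhead[level]) * disk_size

# Six 2 TB disks:
assert raid_capacity(0, 6, 2) == 12   # no redundancy, full capacity
assert raid_capacity(5, 6, 2) == 10   # n-1
assert raid_capacity(6, 6, 2) == 8    # n-2
assert raid_capacity(1, 6, 2) == 2    # one disk's worth
```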
Nested RAID (e.g. RAID 10)
- Combines striping and mirroring
- May tolerate multiple failures, but specific combinations of failures can ruin the array
- Extremely inefficient space usage
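The "specific combination of failures" caveat is easy to see in a sketch of RAID 10 (a stripe over mirrored pairs): the array survives any failures as long as no mirrored pair loses both members. The helper name is made up for illustration.

```python
def raid10_survives(failed, pairs):
    """RAID 10: OK unless BOTH disks of some mirrored pair have failed."""
    return not any(a in failed and b in failed for a, b in pairs)

pairs = [(0, 1), (2, 3)]          # four disks: two mirrored pairs, striped

assert raid10_survives({0, 2}, pairs)       # two failures, different pairs: fine
assert not raid10_survives({2, 3}, pairs)   # one whole pair gone: array ruined
```

So a four-disk RAID 10 can survive two failures, or be killed by two, depending on which two.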
Hardware RAID
- Dedicated CPU & RAM; may include a battery to protect the cache
- High I/O throughput
- On-disk data may be vendor-specific and not portable to other controllers (controller died? Better have an exact replacement!)
- The OS sees a single device from the BIOS, but may require an additional driver
Software RAID
- Relies on the host CPU for all calculations; no battery-backed cache
- OS-level drivers allow maximum portability (within OS families, of course)
- Native to the Linux kernel (woooo!)
- Windows, BSD, Solaris, and OS X all have support
FakeRAID
- Looks and acts like hardware RAID: the OS sees a single BIOS device, but a vendor-specific driver is required
- Performs like software RAID: relies on the host CPU & RAM, with no cache battery
Now what?
- Add filesystems, OR use Logical Volume Management (LVM) and then add filesystems
- Create a storage server: media, backups
LVM
- Physical Volumes (PV) – disks, partitions, arrays
- Volume Groups (VG) – combine PVs into a single pool of space
- Logical Volumes (LV) – created inside the VG and act like partitions; don't need to be contiguous, and can be added or resized at will
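The PV/VG/LV relationship can be sketched as a toy extent allocator (the class and its methods are invented for illustration; real LVM works on physical extents managed by the kernel's device-mapper):

```python
class VolumeGroup:
    """Toy LVM: PVs pooled into extents; LVs grab free extents from anywhere."""
    def __init__(self, pv_extents):
        # each free extent is tagged (pv_index, extent_index)
        self.free = [(pv, e) for pv, n in enumerate(pv_extents) for e in range(n)]
        self.lvs = {}

    def create_lv(self, name, extents):
        if extents > len(self.free):
            raise ValueError("VG out of space")
        self.lvs[name] = [self.free.pop(0) for _ in range(extents)]

    def extend_lv(self, name, extents):
        if extents > len(self.free):
            raise ValueError("VG out of space")
        self.lvs[name] += [self.free.pop(0) for _ in range(extents)]

vg = VolumeGroup([4, 4])       # two PVs, 4 extents each: one 8-extent pool
vg.create_lv("media", 5)       # spills across both PVs: no contiguity needed
vg.extend_lv("media", 1)       # resized at will, while the VG has space
assert len(vg.lvs["media"]) == 6
```

The point of the sketch: an LV's extents can come from any PV in the group, which is exactly why LVs don't need to be contiguous and can grow as long as the pool has room.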