Published by Dwain Ryan. Modified over 8 years ago.
Presented by: KAMONASISH HORE (100103003) RIPON DEB ROY (100103013) BINITA BONIA () PARTHA P. DAS (100103025) DHANJIT KALITA (100103032)
YOUR DATA IS LOST@#!! Do you have backups of all your data — the stuff you cannot afford to lose? How often do you back up: daily, weekly, or monthly? Are the backups magnetic, optical, or physical? How long would it take to recover completely from the disaster?
RAID: REDUNDANT ARRAY OF INEXPENSIVE DISKS In 1987, Patterson, Gibson and Katz at the University of California, Berkeley published a paper entitled "A Case for Redundant Arrays of Inexpensive Disks (RAID)". The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disks that yields performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.
SOFTWARE AND HARDWARE RAID can be implemented in software, in hardware, or in a combination of both. Generally speaking, software RAID tends to offer duplication or mirroring, whilst hardware RAID offers parity-based protection. Software RAID uses more system resources, as more disk ports and channels are required, and it is subject to additional load during write and copy operations. Software RAID may cost less than hardware RAID because it needs no dedicated RAID controller, but it may not offer the same hotfix or performance capabilities. Software RAID is needed for mirroring to remote locations.
RAID 0 Striping at the level of blocks. Data is split across the drives, resulting in higher data throughput. Performance is very good, but the failure of any disk in the array results in data loss. RAID 0 is commonly referred to as "striping". Reliability problems: no mirroring or parity bits.
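Block-level striping can be sketched as a simple round-robin mapping from logical blocks to disks. The layout below (function name and disk count included) is a hypothetical illustration, not any particular controller's scheme:

```python
# Sketch of RAID 0 round-robin block striping: logical block i lands on
# disk (i % n_disks) at stripe row (i // n_disks).

def raid0_locate(block: int, n_disks: int) -> tuple[int, int]:
    """Return (disk index, block offset on that disk) for a logical block."""
    return block % n_disks, block // n_disks

# With 4 disks, consecutive logical blocks rotate across all drives, so a
# large sequential transfer is serviced by every disk in parallel.
for b in range(8):
    disk, offset = raid0_locate(b, 4)
    print(f"logical block {b} -> disk {disk}, offset {offset}")
```

Because every file's blocks are spread over all drives, losing any single drive destroys part of every stripe, which is why RAID 0 alone offers no fault tolerance.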
RAID 0
RAID 1 Introduces redundancy through mirroring. Expensive. Performance issues: No data loss if either drive fails. Good read performance. Reasonable write performance. Cost/MB is high. Commonly referred to as "mirroring".
RAID 1
RAID 2 Uses a Hamming (or other) error-correcting code (ECC). Intended for use with drives that do not have built-in error detection. The central idea is that if one of the disks fails, the remaining bits of the byte and the associated ECC bits can be used to reconstruct the data. Not very popular.
RAID 2
RAID 3 Improves upon RAID 2; known as bit-interleaved parity. The disk controller can detect whether a sector has been read correctly. Storage overhead is reduced: only one parity disk. Incurs the expense of computing and writing parity. Needs dedicated parity hardware.
RAID 3
RAID 4 Stripes data at the block level across several drives, with parity stored on one drive: block-interleaved parity. Allows recovery from the failure of any one disk. Performance is very good for reads. Writes require that the parity data be updated each time, which slows small random writes, but large writes are fairly fast.
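The reason small random writes are slow is the read-modify-write parity update: the controller must read the old data and old parity before writing the new values, because new_parity = old_parity XOR old_data XOR new_data. A minimal sketch with single-integer "blocks" (the values are made up for the demo):

```python
# RAID 4 small-write ("read-modify-write") parity update rule:
#   new_parity = old_parity XOR old_data XOR new_data
# Only the target disk and the parity disk need to be touched.

def update_parity(old_parity: int, old_data: int, new_data: int) -> int:
    return old_parity ^ old_data ^ new_data

data = [0b1010, 0b0110, 0b0011]        # data blocks on three disks
parity = data[0] ^ data[1] ^ data[2]   # parity block on the dedicated disk

new_block = 0b1111
parity = update_parity(parity, data[1], new_block)  # read-modify-write
data[1] = new_block

assert parity == data[0] ^ data[1] ^ data[2]  # parity is still consistent
```

Note that every small write touches the single parity disk, which is the bottleneck RAID 5 removes by distributing parity.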
RAID 4
RAID 5 Block-interleaved distributed parity. Spreads data and parity among all N+1 disks, rather than storing data on N disks and parity on one. Avoids potential overuse of a single parity disk: an improvement over RAID 4. The most common parity RAID scheme.
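The recovery property behind RAID 4/5 is that the parity block is the XOR of the data blocks in a stripe, so any single lost block equals the XOR of all the survivors. A minimal sketch with hypothetical 5-byte blocks:

```python
from functools import reduce
from operator import xor

# Parity-based recovery as used in RAID 4/5: parity = XOR of data blocks,
# so one missing block is the XOR of the surviving blocks plus parity.

stripe = [b"hello", b"world", b"raid5"]              # data blocks (illustrative)
parity = bytes(reduce(xor, t) for t in zip(*stripe)) # byte-wise XOR

# Simulate losing disk 1 and rebuilding it from the survivors + parity.
survivors = [stripe[0], stripe[2], parity]
rebuilt = bytes(reduce(xor, t) for t in zip(*survivors))
print(rebuilt)  # b'world'
```

The same XOR runs during degraded-mode reads, which is why a RAID 5 array keeps serving requests (slowly) with one drive down.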
RAID 5
RAID 6 Double-parity RAID, commonly known as RAID 6, safeguards against data loss by tolerating up to two concurrent drive failures.
CONCLUSION RAIDs offer a cost-effective option to meet the challenge of exponential growth in processor and memory speeds. We believe the size reduction of personal-computer disks is a key to the success of disk arrays, just as Gordon Bell argues that the size reduction of microprocessors is a key to success in multiprocessors. With advantages in cost-performance, reliability, power consumption, and modular growth, we expect RAIDs to replace SLEDs in future I/O systems.
PERFORMANCE MEASURES OF DISKS The main measures of the quality of a disk are: 1. Access time 2. Data transfer rate 3. Reliability
ACCESS TIME The time from when a read or write request is issued to when the data transfer begins. Consists of: seek time (usually quoted as an average seek time) plus rotational latency (also called rotational delay).
SEEK TIME To access data on a given sector of a disk, the arm first must move so that it is positioned over the correct track, and then must wait for the sector to appear under it as the disk rotates. The time for repositioning the arm is called the seek time. Typical seek times range from 2 to 30 milliseconds.
AVERAGE SEEK TIME The average of the seek time, measured over a sequence of random requests. It is about one third of the worst-case seek time.
ROTATIONAL LATENCY Once the seek has occurred, the time spent waiting for the sector to be accessed to appear under the head is called the rotational latency. Typical rotational speeds of disks range from 60 to 120 rotations per second, giving an average rotational latency (half a rotation) of roughly 4 to 8 milliseconds.
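The average latency is simply half of one full rotation. A small helper (the function name is ours) makes the arithmetic explicit for a few common spindle speeds:

```python
# Average rotational latency = half a full rotation, for a drive spinning
# at a constant rate given in RPM.

def avg_rotational_latency_ms(rpm: float) -> float:
    full_rotation_ms = 60_000 / rpm   # one full rotation, in milliseconds
    return full_rotation_ms / 2       # on average, wait half a turn

for rpm in (3600, 5400, 7200):
    print(f"{rpm} rpm -> {avg_rotational_latency_ms(rpm):.1f} ms average latency")
```

At 5,400 rpm this gives about 5.6 ms, which is exactly the latency figure used in the random-I/O example on the transfer-time slide below.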
DATA TRANSFER RATE The data transfer rate is the rate at which data can be retrieved from or stored to the disk. Current disk systems support transfer rates from 1 to 5 megabytes per second. The data transfer rate depends on the "data rate" and the "transfer size". Two kinds of data rate: media rate and interface rate.
MEDIA RATE The media rate depends on the recording density and the rotational speed. For example, a disk rotating at 5,400 rpm with 111 sectors (512 bytes each) per track has a media rate of about 5 Mbytes per second.
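The slide's figure can be reproduced directly: the bytes on one track pass under the head once per rotation, so the media rate is rotations per second times bytes per track.

```python
# Reproducing the media-rate example: 5,400 rpm, 111 sectors of 512 bytes
# per track.

rpm = 5_400
sectors_per_track = 111
bytes_per_sector = 512

rotations_per_second = rpm / 60                      # 90 rotations/s
track_bytes = sectors_per_track * bytes_per_sector   # 56,832 bytes per rotation
media_rate = rotations_per_second * track_bytes      # bytes per second

print(f"media rate: {media_rate / 1e6:.2f} MB/s")    # ~5.11 MB/s
```

The exact value, 5,114,880 bytes per second, rounds to the "5 Mbytes per second" quoted on the slide.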
INTERFACE RATE The interface rate is how fast data can be transferred between the host and the disk drive over its interface. SCSI drives can do up to 20 Mbytes per second over an 8-bit-wide transfer. IDE drives with the Ultra ATA interface support up to 33.3 Mbytes per second.
TRANSFER TIME The transfer time equals the transfer size divided by the data rate. The average media transfer time is 0.8 ms and the average interface transfer time is 0.4 ms. For example, the typical average time to do a random 4-Kbyte I/O is: overhead + seek + latency + transfer = 0.5 ms + 10 ms + 5.6 ms + 0.8 ms = 16.9 ms.
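The random-I/O example is a simple additive model of the four service-time components. All of the numbers below come from the slide; only the function name is ours:

```python
# The slide's random 4-Kbyte I/O example: total service time is just the
# sum of controller overhead, seek, rotational latency, and transfer time.

def random_io_time_ms(overhead: float, seek: float,
                      latency: float, transfer: float) -> float:
    """Total service time for one random I/O, in milliseconds."""
    return overhead + seek + latency + transfer

total = random_io_time_ms(overhead=0.5, seek=10.0, latency=5.6, transfer=0.8)
print(f"random 4-KB I/O: {total:.1f} ms")  # 16.9 ms
```

The model makes clear that seek and rotational latency dominate small random I/O, which is why the block-access optimizations later in this deck focus on reducing arm movement.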
RELIABILITY Measured by the mean time to failure. Mean Time To Failure (MTTF): the average time the disk is expected to run continuously without any failure. Typically 3 to 5 years. The typical mean time to failure of disks today ranges from 30,000 to 800,000 hours.
OPTIMISATION OF DISK-BLOCK ACCESS Block: a contiguous sequence of sectors from a single track. Data is transferred between disk and main memory in blocks. Sizes range from 512 bytes to several kilobytes. → Smaller blocks: more transfers from disk. → Larger blocks: more space wasted due to partially filled blocks. → Typical block sizes today range from 4 to 16 kilobytes. Access to data on disk is much slower than access to data in main memory, so several techniques have been developed for improving the speed of access to blocks on disk.
An Ordinary disk
MAGNETIC HARD DISK MECHANISM NOTE: Diagram is schematic, and simplifies the structure of actual disk drives
Buffering: the buffer is the part of main memory available for storing copies of disk blocks. → There is always a copy of every block kept on disk, but the copy on disk may be an older version of the block than the version in the buffer. → The subsystem responsible for the allocation of buffer space is called the buffer manager. Scheduling: if several blocks from a cylinder need to be transferred from disk to main memory, access time can be saved by requesting the blocks in the order in which they will pass under the heads.
→ If the desired blocks are on different cylinders, it is advantageous to request the blocks in an order that minimizes disk-arm movement. Disk-arm-scheduling algorithms order pending accesses to tracks so that disk-arm movement is minimized. Elevator algorithm: a commonly used algorithm that moves the disk arm in one direction (from outer to inner tracks or vice versa), processing the next request in that direction until there are no more requests in that direction, then reverses direction and repeats. File organization: optimize block-access time by organizing the blocks to correspond to how the data will be accessed.
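The elevator (SCAN) order described above can be sketched in a few lines. The track numbers and starting head position below are made up for the demo; real schedulers also handle requests arriving mid-sweep, which this static version ignores:

```python
# Minimal sketch of elevator (SCAN) scheduling: serve requests at or beyond
# the head in the current direction, then reverse and sweep back.

def elevator_order(head: int, requests: list[int], ascending: bool = True) -> list[int]:
    pending = sorted(requests)
    ahead  = [t for t in pending if t >= head]   # tracks in the current direction
    behind = [t for t in pending if t < head]    # served after the arm reverses
    if ascending:
        return ahead + behind[::-1]
    return behind[::-1] + ahead

# Head at track 50, moving toward higher-numbered tracks:
print(elevator_order(50, [10, 95, 35, 60, 87, 20]))
# -> [60, 87, 95, 35, 20, 10]
```

Compared with serving requests first-come-first-served, this order never crosses the same span of tracks twice per sweep, which is what keeps total arm movement small.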
E.g. store related information on the same or nearby cylinders. Files may get fragmented over time: → e.g. if data is inserted into or deleted from the file, → or if free blocks on disk are scattered, so a newly created file has its blocks scattered over the disk. → Sequential access to a fragmented file results in increased disk-arm movement. Some systems have utilities to defragment the file system, in order to speed up file access. Nonvolatile write buffers: speed up disk writes by writing blocks to a non-volatile RAM buffer immediately. Non-volatile RAM: battery-backed-up RAM or flash memory. → Even if power fails, the data is safe and will be written to disk when power returns.
The controller then writes to disk whenever the disk has no other requests, or when a request has been pending for some time. Database operations that require data to be safely stored before continuing can proceed without waiting for the data to reach the disk. Writes can be reordered to minimize disk-arm movement. Log disk: a disk devoted to writing a sequential log of block updates. Used exactly like non-volatile RAM: → writes to the log disk are very fast, since no seeks are required, → and no special hardware (NV-RAM, non-volatile RAM) is needed.
File systems typically reorder writes to disk to improve performance. Journaling file systems write data in a safe order to NV-RAM or a log disk. Reordering without journaling risks corruption of file-system data. In a log-based file system, data are not written back to their original destination on disk; the file system keeps track of where in the log disk each block was written most recently, and retrieves it from that location.
Example of disk blocks being defragmented