Chapter 10: File Systems (10.C) — Inside the hard disk; disk-access scheduling algorithms; formatting, partitioning, raw disk; RAID


Chapter 10: File Systems 10.C

2 Chapter 10: File Systems  Inside the hard disk  Disk-access scheduling algorithms  Formatting, partitioning, raw disk  RAID (Redundant Arrays of Independent Disks)

3 Hard disk organization  A hard disk in a PC system: a Master Boot Record (MBR) holding the partition table, a boot block, and Partitions 1–4

4 Disk Anatomy  A disk head array over 1–12 platters; the disk spins at around 7,200 rpm; data is laid out in concentric tracks

5 Inside the hard disk

6 Disk parameters  The time to read/write data on disk consists of: Seek time: the time to move the read/write head to the correct track/cylinder; depends on how fast the head assembly moves. Latency (rotational delay): the time the head waits for the needed sector to rotate underneath it; depends on the disk's rotation speed. Transfer time: the time to move the data between disk and memory, or vice versa; depends on the bandwidth of the channel between disk and memory.  Disk I/O time = seek time + rotational delay + transfer time
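The formula on this slide can be checked with a short sketch. The drive figures used below (4 ms average seek, 7,200 rpm, 100 MB/s transfer rate, 4 KB request) are illustrative assumptions, not values from the slides:

```python
def disk_io_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    """Estimate one request's service time: seek + rotational delay + transfer."""
    # On average the head waits half a revolution for the target sector.
    rotational_delay_ms = 0.5 * (60_000 / rpm)
    # Transfer time for the requested amount of data.
    transfer_ms = request_kb / 1024 / transfer_mb_s * 1000
    return seek_ms + rotational_delay_ms + transfer_ms

# Example: seek dominates only when requests are small and scattered.
t = disk_io_time_ms(4.0, 7200, 100, 4)   # ~8.2 ms, mostly seek + rotation
```

Note that for a 4 KB request the transfer takes well under 0.1 ms, while seek plus rotation cost about 8 ms, which is why the scheduling algorithms later in this chapter focus on reducing head movement.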

7 Modern disks  Modern hard drives use zoned bit recording Disks are divided into zones with more sectors on the outer zones than the inner ones (why?)

8 Addressing Disks  What the OS knows about the disk: interface type (IDE/SCSI/SATA), unit number, number of sectors.  What happened to sectors, tracks, etc.? Old disks were addressed by cylinder/head/sector (CHS). Modern disks are addressed using a linear addressing scheme:  LBA = logical block address  As an example, the maximum LBA is 586,072,367 for a 300 GB disk.  Who uses sector numbers? File system software assigns logical blocks to files. Terminology:  To disk people, “block” and “sector” are the same  To file system people, a “block” is some fixed number of sectors
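For illustration, here is the legacy CHS-to-LBA mapping mentioned on this slide. The 16-head, 63-sectors-per-track geometry is an assumed, BIOS-era convention, not something the slides specify; real modern drives hide their geometry and expose only LBAs:

```python
HEADS = 16              # assumed translated geometry, typical of old BIOSes
SECTORS_PER_TRACK = 63  # CHS sector numbers start at 1, not 0

def chs_to_lba(c, h, s):
    """Linearize (cylinder, head, sector) into a logical block address."""
    return (c * HEADS + h) * SECTORS_PER_TRACK + (s - 1)

def lba_to_chs(lba):
    """Invert the mapping: recover (cylinder, head, sector) from an LBA."""
    c, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
    h, s = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s + 1

# Round-tripping shows the two schemes carry the same information.
assert lba_to_chs(chs_to_lba(5, 3, 42)) == (5, 3, 42)
```

The round-trip property is the point: LBA loses no information, it just hides the messy geometry from the OS.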

9 Disk Addresses vs Scheduling  Goal of OS disk-scheduling algorithm Maintain queue of requests When disk finishes one request, give it the ‘best’ request  e.g., whichever one is closest in terms of disk geometry  Goal of disk's logical addressing Hide messy details of which sectors are located where  Oh, well Older OS's tried to understand disk layout Modern OS's just assume nearby sector numbers are close Experimental OS's try to understand disk layout again Next few slides assume “old” / “experimental”, not “modern”

10 Improving disk-access performance Approaches:  Reduce the disk's physical size  Increase the disk's rotation speed  Schedule disk operations (disk scheduling)  Lay data out on disk carefully: keep related data on nearby tracks; interleaving  Place frequently used files at favorable locations  Choose a suitable logical block size  Read ahead: speculatively read blocks of data before the application requests them ( principle of spatial locality)

11 Disk-access performance (1)  Metrics for servicing disk requests (performance metrics): Throughput: the number of operations completed per unit of time. Disk utilization: the fraction of disk I/O time spent actually transferring data. Deadline: the time by which a request must be completed. Response time: the time from when a request is issued until it is fully serviced. Fairness.

12 Disk-access performance (2)  'Natural' optimization criteria: Maximize throughput. Maximize disk utilization. Minimize the number of missed deadlines. Minimize average response time. Be fair to all disk requests.

13 Disk-access scheduling  Main idea: reorder the pending disk requests to satisfy the system's optimization criteria.  Disk-access scheduling algorithms: First Come, First Served (FCFS); Shortest-Seek-Time First (SSTF, SSF); SCAN; C-SCAN (Circular SCAN); C-LOOK

14 First Come First Served (FCFS) Queue (cylinder numbers): 98, 183, 37, 122, 14, 124, 65, 67. The head is at cylinder 53. Total number of cylinders traversed: 640

15 Shortest-Seek-Time First (SSTF)

16 SCAN (elevator algorithm) The head starts at cylinder 53 and is moving toward cylinder 0

17 C-SCAN (Circular SCAN) The head starts at cylinder 53 and is servicing requests on the way to cylinder 199

18 C-LOOK The head starts at cylinder 53 and is servicing requests on the way to cylinder 199

19 Evaluating the disk-scheduling algorithms  FCFS is fair but makes no attempt to maximize disk utilization  SSTF aims to maximize disk utilization but is not fair (requests for distant cylinders can be starved)
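The trade-off can be seen by simulating FCFS and SSTF on the request queue from the FCFS slide (the head position of 53 is consistent with that slide's total of 640 traversed cylinders). This is a sketch of the two policies, not production scheduler code:

```python
def fcfs(start, queue):
    """Serve requests strictly in arrival order; sum the head movement."""
    total, pos = 0, start
    for c in queue:
        total += abs(c - pos)
        pos = c
    return total

def sstf(start, queue):
    """Always serve the pending request closest to the current head position."""
    pending, total, pos = list(queue), 0, start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # greedy: closest first
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))  # 640, matching the FCFS slide
print(sstf(53, queue))  # 236: far less movement, but distant requests wait
```

SSTF's greedy choice is exactly why it can starve requests: if nearby requests keep arriving, cylinder 183 may wait indefinitely.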

20 Disk management: formatting  Low-level (physical) formatting divides the disk into sectors. Each sector has a special on-disk structure: header – data – trailer. The header and trailer hold information reserved for the disk controller, such as the sector number and an error-correcting code (ECC). When the controller writes data to a sector, the ECC field is updated with a value computed from the data being written. When the sector is read, the ECC of the data is recomputed and compared with the stored ECC value to verify the data's integrity.

21 Disk management: partitioning  Partitioning divides the disk into regions (partitions), each a contiguous sequence of blocks. Each partition is treated as a separate "logical disk".  Logical formatting of a partition creates a file system (FAT, ext2, NTFS, …) on it, which involves: writing the file system's initial data structures to the partition; creating the data structures that track free and allocated space (DOS: the FAT; UNIX: the superblock and i-node list)

22 Example of formatting a partition (MBR)

23 Disk management: raw disk  A raw disk is a partition with no file system on it.  I/O to a raw disk is called raw I/O: blocks are read or written directly, without the file system's services (buffer cache, file locking, prefetching, free-space allocation, file naming, and directories).  Example: some database systems choose to use raw disks to obtain higher disk performance.

24 Managing swap space  Swap space is disk space used to extend the virtual address space in virtual-memory techniques. Management goal: high performance for the virtual-memory system (e.g., demand paging). Implementation:  a dedicated partition, e.g., Linux's swap partition,  or a file inside a file system, e.g., Windows's pagefile.sys  Usually combined with caching or a contiguous-allocation policy

25 SSD (Solid-state drive)

26 RAID Introduction  Disks act as bottlenecks for both system performance and storage reliability  A disk array consists of several disks which are organized to increase performance and improve reliability Performance is improved through data striping Reliability is improved through redundancy  Disk arrays that combine data striping and redundancy are called Redundant Arrays of Independent Disks, or RAID  There are several RAID schemes or levels  Slide from CMPT 354

27 Data Striping  A disk array gives the user the abstraction of a single, large disk When an I/O request is issued, the physical disk blocks to be retrieved have to be identified How the data is distributed over the disks in the array affects how many disks are involved in an I/O request  Data is divided into equal-size partitions called striping units The size of the striping unit varies by the RAID level  The striping units are distributed over the disks using a round-robin algorithm KEY POINT – disks can be read in parallel, increasing the transfer rate

28 Striping Units – Block Striping  Assume that a file is to be distributed across a 4-disk RAID system and that, purely for the sake of illustration, blocks are only one byte (here striping-unit size = block size). The notional file is a series of bits, numbered so that we can distinguish them. With BLOCK striping, consecutive blocks (bytes) of the file are distributed across the 4 RAID disks in round-robin order.

29 Striping Units – Bit Striping  The same file on the 4-disk RAID using BIT striping, and again, purely for the sake of illustration, blocks are only one byte. With BIT striping, consecutive bits (rather than blocks) of the file are distributed across the 4 RAID disks in round-robin order.

30 Striping Units Performance  A RAID system with D disks can read data up to D times faster than a single disk system As the D disks can be read in parallel For large reads* there is no difference between bit striping and block striping  *where some multiple of D blocks are to be read Block striping is more efficient for many unrelated requests  With bit striping all D disks have to be read to recreate a single block of the data file  In block striping each disk can satisfy one of the requests, assuming that the blocks to be read are on different disks  Write performance is similar but is also affected by the parity scheme

31 Reliability of Disk Arrays  The mean-time-to-failure (MTTF) of a hard disk is around 50,000 hours, or 5.7 years  In a disk array, the MTTF of the array as a whole decreases Because the number of disks is greater  The MTTF of a disk array containing 100 disks is 21 days (= 50,000/100 hours) Assuming that failures occur independently and The failure probability does not change over time Pretty implausible assumptions  Reliability is improved by storing redundant data
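The slide's arithmetic can be checked directly; under its (admittedly implausible) assumptions of independent, constant-rate failures, the array's MTTF is simply the single-disk MTTF divided by the number of disks:

```python
disk_mttf_hours = 50_000                 # ~5.7 years for one disk
num_disks = 100

# Time until *some* disk in the array fails, under the slide's assumptions.
array_mttf_hours = disk_mttf_hours / num_disks
print(array_mttf_hours / 24)             # ~20.8 days, the slide's "21 days"
```

The same formula explains the later slide: with redundancy, only a second failure inside a reliability group before repair completes loses data, which is how 110 disks can reach an effective MTTF of centuries.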

32 Redundancy  Reliability of a disk array can be improved by storing redundant data  If a disk fails, the redundant data can be used to reconstruct the data lost on the failed disk The data can either be stored on a separate check disk or Distributed uniformly over all the disks  Redundant data is typically stored using a parity scheme There are other redundancy schemes that provide greater reliability

33 Parity Scheme  For each bit on the data disks there is a related parity bit on a check disk If the sum of the bits on the data disks is even the parity bit is set to zero If the sum of the bits is odd the parity bit is set to one  The data on any one failed disk can be recreated bit by bit
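The even-parity rule above is exactly bitwise XOR, and the "recreated bit by bit" claim follows from XOR's self-inverse property: XOR-ing the surviving disks with the check disk yields the lost disk. A minimal sketch with one-byte blocks (the byte values are arbitrary illustrations):

```python
def parity(blocks):
    """XOR equal-length byte blocks together; this is the parity scheme."""
    out = blocks[0]
    for b in blocks[1:]:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b'\x0b', b'\xad', b'\xc0', b'\xde']   # four data disks, 1-byte blocks
check = parity(data)                          # contents of the check disk

# Disk 2 fails: rebuild its block from the other three plus the check disk.
rebuilt = parity([data[0], data[1], data[3], check])
assert rebuilt == data[2]
```

Note the scheme tolerates exactly one failed disk per group; two simultaneous failures leave the XOR equation with two unknowns.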

34 (Figure) The 4-disk RAID system showing the actual bit values, plus a fifth CHECK DISK holding the parity data.

35 Parity Scheme and Reliability  In RAID systems the disk array is partitioned into reliability groups A reliability group consists of a set of data disks and a set of check disks The number of check disks depends on the reliability level that is selected  Given a RAID system with 100 disks and an additional 10 check disks the MTTF can be increased from 21 days to 250 years!

36 RAID Level 0: Nonredundant  Uses data striping to increase the transfer rate Good read performance  Up to D times the speed of a single disk  No redundant data is recorded The best write performance, as redundant data does not have to be recorded The lowest-cost RAID level, but Reliability is a problem, as the MTTF decreases in proportion to the number of disks in the array  With 5 disks' worth of data, only 5 disks are required

37 RAID Level 0 layout (blocks striped round-robin across 5 disks):
Disk 0: Block 1, Block 6, Block 11, Block 16, Block 21
Disk 1: Block 2, Block 7, Block 12, Block 17, Block 22
Disk 2: Block 3, Block 8, Block 13, Block 18, Block 23
Disk 3: Block 4, Block 9, Block 14, Block 19, Block 24
Disk 4: Block 5, Block 10, Block 15, Block 20, Block 25

38 RAID Level 1: Mirrored  For each disk in the system an identical copy is kept, hence the term mirroring No data striping, but parallel reads of the duplicate disks can be made, otherwise read performance is similar to a single disk  Very reliable but the most expensive RAID level Poor write performance as the duplicate disk has to be written to  These writes should not be performed simultaneously in case there is a global system failure  With 4 data disks, 8 disks are required

39 RAID Level 1 layout (mirroring):
Disk 0: Block 1, Block 2, Block 3, Block 4, Block 5
Disk 1: Block 1, Block 2, Block 3, Block 4, Block 5 (identical copy)

40 RAID Level 2: Memory-Style ECC  Not common because redundancy schemes such as bit- interleaved parity provide similar reliability at better performance and cost.

41 RAID Level 3: Bit-Interleaved Parity  Uses bit striping Good read performance for large requests  Up to D times the speed of a single disk  Poor read performance for multiple small requests  Uses a single check disk with parity information Disk controllers can easily determine which disk has failed, so the check disks are not required to perform this task Writing requires a read-modify-write cycle  Read D blocks, modify in main memory, write D + C blocks

42 RAID Level 3 layout (bit striping with a dedicated parity disk; only the first three of the data disks are shown):
Disk 0: Bit 1, Bit 33, Bit 65, Bit 97, Bit 129
Disk 1: Bit 2, Bit 34, Bit 66, Bit 98, Bit 130
Disk 2: Bit 3, Bit 35, Bit 67, Bit 99, Bit 131
…
Parity disk: P(1–32), P(33–64), P(65–96), P(97–128), P(129–160)

43 RAID Level 4: Block-Interleaved Parity  A block-interleaved parity disk array is similar to the bit-interleaved parity disk array, except that data is interleaved across the disks in blocks of arbitrary size rather than in bits

44 RAID Level 5: Block-Interleaved Distributed Parity  Uses block striping Good read performance for large requests  Up to D times the speed of a single disk  Good read performance for multiple small requests that can involve all disks in the scheme  Distributes parity information over all of the disks Writing requires a read-modify-write cycle  But several write requests can be processed in parallel as the bottleneck of a single check disk has been removed  Best performance for small and large reads and large writes  With 4 disks of data, 5 disks are required with the parity information distributed across all disks
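The read-modify-write cycle mentioned for small writes under block-interleaved parity has a cheap form: since parity is XOR, new_parity = old_parity XOR old_data XOR new_data, so only the target disk and the parity disk need to be read and written. A minimal sketch with single-byte "blocks" (the byte values are arbitrary illustrations):

```python
def xor(a, b):
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

old_data, new_data = b'\x55', b'\x0f'   # block being overwritten
old_parity = b'\xa1'                    # current parity for this stripe

# XOR out the old data's contribution, XOR in the new data's contribution.
new_parity = xor(xor(old_parity, old_data), new_data)

# Equivalent to recomputing the stripe's parity from scratch with new_data
# in place, but without reading the other D-1 data disks.
```

Because RAID 5 rotates parity across all disks, several such small writes can update different parity blocks in parallel, removing the single-check-disk bottleneck of RAID 4.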

45 (Figure: columns Disk 0 through Disk 4)  Each square corresponds to a stripe unit. Each column of squares corresponds to a disk.  P0 computes the parity over stripe units 0, 1, 2 and 3; P1 computes parity over stripe units 4, 5, 6 and 7; etc.