Linux IDE Disk Servers
Andrew Sansum
8 March 2000
Acknowledgements
Big thanks to Fabien Collin (CERN)
Things to look for in a disk server
Reliability (H/W + S/W)
Crash Recovery (journaled filesystem)
Performance
Manageability (e.g. on Compaq/Tru64)
–Filesystems (AdvFS is very friendly)
–Disk tools (SCU is a good low-level utility)
–Monitoring (SCSI diagnostics)
Cost - MAJOR MOTIVATING FACTOR
What I won’t/will tell you
What I’m not going to tell you:
–SCSI is better than IDE
–Linux is better than Solaris
–PC hardware is better than DEC
–H/W RAID is better than S/W RAID
What I will tell you:
–The new test setup meets many of our requirements
–Will move the server into production shortly
Benchmark
Benchmarking was done principally using Bonnie:
–Sequential benchmark
–Character-by-character read/write
–4K block read/write
Multiple threads were tested by running multiple Bonnie instances - some problems.
Results are consistent with st.d (another benchmark).
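Purely as an illustration, a run of this kind might look as follows; the file size, target directory, label and instance count are assumptions, not the exact parameters used:

    # Single Bonnie run against the RAID filesystem (path and label illustrative)
    bonnie -d /raid/test -s 1024 -m ide-server
    # Crude multi-threaded test: several independent Bonnie instances in parallel
    for i in 1 2 3 4; do
        bonnie -d /raid/test -s 1024 -m ide-server-$i &
    done
    wait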
Existing Systems
To put the new service into context, a quick look at existing systems:
–DEC 3000: Seagate Barracuda 4 on normal single-ended SCSI
–DEC AlphaServer: Fujitsu drives on Ultra Wide SCSI
–DEC 1000 + Sweet Valley H/W RAID
–Sun Ultra 10 + Sun A1000 H/W RAID
Existing Systems
IDE Disk Server Configuration
Dual PIII 500MHz CPUs + 256MB memory
Intel L440GX server motherboard (onboard VGA / 100Mb network / SCSI)
Single internal SCSI system disk
4 Promise dual ATA66 controllers
ATA66 cables
10 IBM 7200 rpm 35GB IDE drives (tested 20)
Chieftec Jumbo File Server case
Dual redundant PSU - disk cooler trays
Operating System
RedHat 6.1
Kernel with a recent IDE patch to increase the number of controllers and specifically to support the Promise controller
Software RAID patch
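As a rough sketch (not taken from the talk) of how one might confirm that the patched kernel has found the extra IDE channels and that the software RAID layer is active; the grep patterns are assumptions about the boot messages:

    # Look for the Promise channels and the extra IDE drives in the boot log
    dmesg | grep -i promise
    dmesg | grep -i "hd[e-z]:"
    # Confirm the software RAID (md) driver is present
    cat /proc/mdstat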
Problems/Considerations
None of the problems quoted by suppliers appeared:
–Not enough IRQs
–Cannot run more than four IDE controllers
–No more than 2 Promise cards
Power supply is awkward (daisy chains)
IDE cable length (1m) is beyond the standard. Initially had CRC errors before moving to ATA66 cables.
Filesystem Configuration
SCSI
–Six drives on a single onboard controller
IDE
–10 drives on 5 controllers, master/slave
Software RAID (0 or 5) with 32K chunk
Ext2 filesystem with 4K block (and span 8)
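A minimal sketch of how this layout might be expressed with the raidtools of the time; the device names, the RAID-5 choice, and the assumption that "span 8" corresponds to the mke2fs stride option are all illustrative:

    # /etc/raidtab fragment (device names and ordering are illustrative)
    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           10
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hde
        raid-disk               0
        device                  /dev/hdf
        raid-disk               1
        # ...remaining eight drives declared the same way...

    # Build the array, then make an ext2 filesystem with 4K blocks and a
    # stride of 8 blocks (8 x 4K = 32K, matching the RAID chunk size)
    mkraid /dev/md0
    mke2fs -b 4096 -R stride=8 /dev/md0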
Single Disk Performance
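Purely as an illustration of how single-drive figures of this kind can be sanity-checked (the device name is an assumption):

    # Raw sequential read rate straight from one IDE drive
    hdparm -t /dev/hde
    # Check that DMA is actually enabled for the drive
    hdparm -d /dev/hde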
SCSI RAID 5
IDE RAID 0 and 5
NFS Performance
Not fully load tested - only a 100Mbit connection
NFS to a single RH 6.1 client gives:
–8MB/s write
–7.3MB/s read
Possibility to install a Gbit NIC in the spare 66MHz slot (second bus).
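For context, a minimal sketch of the kind of export and client mount such a test implies; the path, hostnames and options are assumptions:

    # /etc/exports on the server (path and client name illustrative)
    /raid    nfsclient(rw,no_root_squash)
    # Re-export after editing, then mount on the RH 6.1 client
    exportfs -a
    mount -t nfs diskserver:/raid /mnt/raid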
Is the server suitable?
Reliability: solid under test
Crash Recovery: no JFS yet (ReiserFS?)
Performance: excellent
Manageability: worse than some
Cost: good
Cost
Difficult to get PC builders to make it. Eventually used a specialist company (Workstations UK).
–11K pounds for 720GB (inc VAT)
DIY is a better option:
–Cost of drives - under 5K pounds
–Total system - 7.5K pounds for 720GB
Next system would be 20 * IBM 72GB drives (1.5TB)
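To make the comparison concrete: 11K pounds for 720GB is roughly 15 pounds/GB for the bought-in system, against roughly 10 pounds/GB (7.5K for 720GB) for the DIY build.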
Conclusions
IDE drives give excellent performance
With fast CPUs, software RAID gives excellent performance
A 20-drive PC can be built - although a few H/W concerns (mainly cables)
Some missing functionality with Linux (JFS, drive health monitoring)
1.5TB server achievable any day now
Server meets some (but not all) needs