Slide 1: Online Systems Status
- Review of requirements
- System configuration
- Current acquisitions
- Next steps...
Upgrade Meeting, 4-Sep-1997, Stu Fuess
Slide 2: Review of Requirements
Data flow:
- 53 Hz @ 250 Kbytes/event = 13 Mbytes/sec
- Data spooled to local disk, then transferred to FCC for storage; implies >25 Mbytes/sec local storage access
- Local buffering of at least 1 hour, plus reasonable unloading of the buffer; implies ~100 Gbytes local storage and ~35 Mbytes/sec local storage access
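A back-of-the-envelope check reproduces these figures (a minimal sketch; the assumption that spooling to disk and the transfer to FCC hit the buffer disks concurrently is ours, though it is what ">25 Mbytes/sec" suggests):

```python
# Sanity-check the slide's data-flow arithmetic (1997 figures).
event_rate_hz = 53    # Level 3 accept rate
event_size_kb = 250   # Kbytes per event

rate_mb_s = event_rate_hz * event_size_kb / 1000.0
print(f"raw data rate: {rate_mb_s:.1f} Mbytes/sec")       # ~13

# Spooling to disk while simultaneously shipping to FCC means the buffer
# disks carry a write stream and a read stream at once.
print(f"write + read: {2 * rate_mb_s:.1f} Mbytes/sec")    # >25

# One hour of buffering at the raw rate:
print(f"1 hour of data: {rate_mb_s * 3600 / 1000:.0f} Gbytes")  # ~48; ~100 GB gives ~2 hours
```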
Slide 3: Requirements cont'd.
Additional data production:
- Calibration data
- Run monitoring databases
- Additional ~2 Mbytes/sec
Data monitoring:
- Desire ~10 Hz to monitor(s): ~5 Mbytes/sec distribution
- Processing power for monitors: look for a cheap PC solution
Detector control & monitoring:
- Bandwidths not yet well understood; expected to be of order 1 Mbyte/sec
- Supplied by workstations or PCs
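Summing the streams gives the rates that reappear on the schematic on the next slide (a hedged reconstruction; the grouping of streams into the FCC link is inferred, not stated):

```python
# Bandwidth budget implied by slides 2-3 (grouping of streams is inferred).
raw_data   = 13.0   # Mbytes/sec, physics events
extra_prod = 2.0    # Mbytes/sec, calibration + run monitoring databases
monitors   = 5.0    # Mbytes/sec, event distribution to monitor nodes

print(f"to FCC: {raw_data + extra_prod:.0f} Mbytes/sec")  # 15, as on the schematic

# ~10 Hz of 250-Kbyte events per monitor stream:
per_monitor = 10 * 250 / 1000.0
print(f"per monitor: {per_monitor:.1f} Mbytes/sec")       # 2.5; ~5 MB/s covers ~2 streams
```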
Slide 4: Schematic Configuration
[Diagram: Front End Nodes feed the Level 3 Interface (MPM modules) over the Data Cable; the Host writes 13 Mbyte/s to a 100 GB Buffer Disk, distributes 5 Mbyte/s to Monitor Nodes, and sends 15 Mbyte/s over the Network Interconnect to FCC; Control Room Nodes share the interconnect.]
Slide 5: System Configuration
- Design goal of >99% uptime
- Redundant systems, each capable of (nearly?) full bandwidth
- Multiple paths to critical disks: DØ and Fermi product disks, NOT data buffer disks
- Existing technologies:
  - UNIX 'cluster' for redundant disk access (though probably do not need failover capabilities)
  - Networks: FDDI <10 Mbyte/s; 100 Mbit Ethernet ~5 Mbyte/s
  - I/O: Fast Wide SCSI ~20 Mbyte/s; Ultra Wide SCSI ~40 Mbyte/s
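The slide states the >99% target without quantifying it; the standard redundancy arithmetic below shows why duplicated, full-bandwidth systems get there (the 95% single-server availability is an illustrative assumption, not a figure from the slide):

```python
# Redundancy arithmetic behind the >99% uptime goal.
single = 0.95                      # assumed availability of one server (illustrative)
redundant = 1 - (1 - single) ** 2  # service is down only if both systems are down
print(f"one server : {single:.2%}")     # 95.00%
print(f"two servers: {redundant:.2%}")  # 99.75%

print(f">99% uptime = < {0.01 * 365 * 24:.0f} hours of downtime per year")  # ~88
```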
Slide 6: System Configuration cont'd.
Problems:
- Not easy to get required bandwidths now: network in from L3 and out to FCC; I/O to data buffer disks
Relying on future technologies:
- Gigabit Ethernet
- Fibre Channel - Arbitrated Loop (FC-AL)
- FCC tape archive / HSM / robotics
Fallback position:
- Require multiple servers / network / I/O paths (sized in the sketch below)
- Multiple I/O controllers per server
- Conventional tape mounting
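The fallback sizing follows from dividing each required stream by what one 1997-era link or bus delivers, using the per-technology rates from slide 5 (which stream rides which technology is our assumption):

```python
import math

# Minimum parallel paths: required bandwidth / per-path bandwidth, rounded up.
def paths_needed(required_mb_s: float, per_path_mb_s: float) -> int:
    return math.ceil(required_mb_s / per_path_mb_s)

print(paths_needed(35, 20))  # buffer-disk I/O on Fast Wide SCSI  -> 2 buses
print(paths_needed(13, 5))   # L3 input on 100 Mbit Ethernet      -> 3 links
print(paths_needed(15, 10))  # FCC output on FDDI (<10 Mbyte/s)   -> 2 rings
```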
Slide 7: Online Configuration
[Diagram: Front End Nodes and the Data Cable feed three AlphaServers (Host A, Host B, Monitor), each with a Level 3 Interface (MPM modules), an FC buffer disk on FC-AL, and a 100 Mb network link; the servers are joined by Memory Channel and shared SCSI to a StorageWorks RAID array (RAID disk); 100 Mb and 10 Mb switches and routers connect monitor nodes (AlphaStation, NT PC, Linux PC), NT PC control room nodes, and FCC.]
Slide 8: Purchases
FY97 purchases:
- Server hardware: DEC AlphaServer 4000 5/466, 512 MB; 16 PCI slots; 3-port UWSE SCSI RAID controller (local buffer disk); FWSE SCSI controller (local system disk); FWD SCSI controller (shared SCSI); FDDI and 100 Mb Ethernet
- StorageWorks hardware: rack, controller crate, 3 disk crates; 6-channel RAID controller
- Server software: Digital UNIX (unlimited license); DEC C, C++, and FORTRAN compilers
- Other: 144 GB local disk; 3 WinNT PCs (1 to also have Linux); network switches / hubs
Slide 9: 1/1/98 Configuration
[Diagram: Front End Nodes and the Data Cable feed an AlphaServer 4000 through the Level 3 Interface (MPM modules); a SCSI buffer disk and a StorageWorks RAID array (RAID disk) attach over SCSI; an FDDI concentrator provides the FDDI link; a 100 Mb switch and a 100/10 Mb hub serve monitor nodes (Alpha 3000/300, NT/Linux PC) and NT PC control room nodes.]
Slide 10: Purchases...
Early FY98 purchases:
- KAI C++ compiler
- Convert old Alphas from VMS to UNIX
- ORACLE?
- Fibre Channel controller, disk chassis, and disks
Future upgrade paths:
- Additional servers
- Memory Channel interconnect, TruCluster software
- AlphaServer 4000 can hold 2 CPUs and 4 GB of memory
- Add a 2nd RAID controller to the shared disk
- Replace SCSI with FC-AL for buffer storage: FC RAID controllers; StorageWorks crate backplanes can be modified; dual-port FC disks
- FC network
Slide 11: Over the horizon...
DEC future products:
- AlphaServer CPUs up to 600 MHz by May '98
- Fibre Channel SCSI hardware late '97, UNIX drivers Spring '98 (but a 3rd-party GENROCO controller is available now)
- No FC networking plans; relying on FDDI with GigaSwitch (but FC networking is available with a GENROCO board/driver)
- 10,000 RPM drive support coming soon; unclear when 23 GB drives will be supported
In general...
- FC-AL will be the SCSI solution
- Storage will appear as a network object