FZU Computing Centre
Jan Švec
Institute of Physics of the AS CR, v.v.i.
29.8.2011
FZU Computing Centre
The computing centre is an independent part of the Department of Networking and Computing Techniques.
Members of the team, led by J. Chudoba from the Department of Experimental Particle Physics: T. Kouba, J. Švec, J. Uhlířová, M. Eliáš, J. Kundrát.
Operation is strongly supported by some members of the Department of Detector Development and Data Processing, led by M. Lokajíček: J. Horký, L. Fiala.
FZU Computing Centre
HEP experiments:
– D0 experiment, Fermilab (USA)
– LHC experiments ATLAS and ALICE (CERN)
– STAR experiment, BNL (USA)
Solid state physics
Astroparticle physics:
– Pierre Auger Observatory (Argentina)
FZU Computing Centre
History: 2002
– 34 HP LP1000, 2x 1.3 GHz Pentium III, 1 GB RAM
– First 1 TB of disks
– “Terminal room”
History: 2004
A real data center:
– 200 kVA UPS, 380 kVA diesel generator
– 2x 56 kW CRAC units
– 67 HP DL140, 3.06 GHz Prescotts
– 10 TB disk storage
History: 2006–2007
2006:
– 36 HP BL35p
– 6 HP BL20p
2007:
– 18 HP BL460c
– 12 HP BL465c
History: 2006–2007 (cont.)
– 3 HP DL360 for Xen
– FC infrastructure
– HP EVA, 70+ TB usable capacity, FATA drives
– Disk images for Xen machines; the rest used as DPM
– Warranty ended in Dec 2010; a new box was cheaper than the warranty extension
History: 2008
– 84x IBM iDataPlex dx340 nodes, 2x Xeon E5440 => 8 cores
– 20x Altix XE 310 twins (40 hosts), 2x Xeon E5420 => 8 cores
– 3x Overland Ultamus 4800 (48 TB raw each), SAN
– Tape library NEO 8000
– 2x VIA-based NFS storage
– First decommissioning of computing nodes: 2002's HP LP1000r
History: 2009
– 65x IBM iDataPlex dx360M2 nodes, 2x Xeon E5520 with HT => 16 cores
– 9x Altix XE 340 twins (18 hosts), 2x Xeon E5520 without HT => 8 cores
– All water cooled
– 3x Nexsan SATABeast (84 TB raw each), SAN
– 3x Atom-based storage nodes – wrong idea, the WD15EADS-00P8B0 drives are just weird
History: 2009 – SGI Altix ICE 8200
– Solid state physics
– 512 cores (128x E5420, 2.5 GHz)
– 1 TB RAM
– Infiniband interconnect
– 6 TB disk array (Infiniband)
– Torque/Maui, OpenMPI (see the job script sketch below)
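Since the cluster is scheduled with Torque/Maui and uses OpenMPI, work is submitted as batch job scripts. A minimal sketch of such a script is below; the job name, queue name, resource counts and application name are illustrative assumptions, not the actual site configuration. With OpenMPI built with Torque support, mpirun picks the allocated node list up from the batch system automatically.

    #!/bin/bash
    #PBS -N mpi_example          # job name (hypothetical)
    #PBS -q batch                # queue name is an assumption
    #PBS -l nodes=4:ppn=8        # request 4 nodes with 8 cores each (matches the 8-core nodes)
    #PBS -l walltime=02:00:00    # maximum run time

    cd "$PBS_O_WORKDIR"          # run from the directory the job was submitted from
    mpirun ./my_mpi_app          # Torque-aware OpenMPI starts one rank per allocated slot

Such a script would be submitted with "qsub job.sh" and monitored with "qstat".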
Cooling: 2009
– New water cooling infrastructure
– STULZ CLO 781A, 2x 88 kW
– Additional ceiling for hot/cold air separation
History: 2010
– 26x IBM iDataPlex dx360M3 nodes, 2x Xeon X5650 with HT => 24 cores
– Bull Optima 1500 – HP EVA replacement
– 8x Supermicro SC847E16 + 847E16-RJBOD DPM pool nodes, 1.3 PB
– Second decommissioning of computing nodes: 2004's HP DL140
– 8 IBM dx340 swapped for dx360M3
Network facilities
– External connectivity delivered by CESNET, a GEANT stakeholder
– 10 Gb to the public Internet
– 1 Gb to FZK (Karlsruhe)
– 10 Gb connection demuxed to several dedicated 1 Gb lines: FNAL, BNL, ASGC, Czech Tier-3s, ...
– 10G internal network; a few machines still on 1G switches → LACP (see the bonding sketch below)
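For the machines still on 1G switches, LACP (802.3ad) link aggregation bonds several 1 Gb ports into one logical link. A minimal sketch of a Linux bonding configuration in the RHEL/Scientific Linux style of that era is shown below; the interface names, IP addressing and option values are assumptions for illustration, and the corresponding switch ports must be configured for LACP as well.

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (addressing is assumed)
    DEVICE=bond0
    BOOTPROTO=static
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1, ...)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes

With mode=802.3ad the kernel bonding driver negotiates an LACP aggregate with the switch, so the loss of one member link only reduces bandwidth instead of breaking connectivity.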
2011: Current Procurements
Estimates:
– Worker nodes, approx. 5500 HEPSPEC06
– 800 TB of disk storage
– Water-cooled rack + water-cooled back doors
– Another NFS fileserver
Overall performance growth
Current numbers
– 275 WNs for HEP, 76 WNs for solid state physics
– 3000 cores for HEP, 560 cores for solid state physics
– Torque & Maui
– 2 PB on disk servers (DPM or NFSv3)
Q & A Questions?