FZU Computing Centre
Jan Švec
Institute of Physics of the AS CR, v.v.i.
29.8.2011


FZU Computing Centre
The computing centre is an independent part of the Department of Networking and Computing Techniques.
Members of the team, led by J. Chudoba from the Department of Experimental Particle Physics: T. Kouba, J. Švec, J. Uhlířová, M. Eliáš, J. Kundrát.
Operation is strongly supported by some members of the Department of Detector Development and Data Processing, led by M. Lokajíček: J. Horký, L. Fiala.

FZU Computing Centre
HEP experiments
 D0 experiment, Fermilab (USA)
 LHC experiments ATLAS and ALICE (CERN)
 STAR experiment, BNL (USA)
Solid state physics
Astroparticle physics
 Pierre Auger Observatory (Argentina)

FZU Computing Centre

History: 2002
HP LP1000r
 2x 1.3 GHz Pentium 3
 1 GB RAM
First 1 TB of disks
"Terminal room"

History: 2004
A real data center
200 kVA UPS
 380 kVA diesel
2x 56 kW CRACs
67x HP DL140
 3.06 GHz Prescotts
10 TB disk storage

History: 2006 – 2007
2006
 36x HP BL35p
 6x HP BL20p
2007
 18x HP BL460c
 12x HP BL465c

History: 2006 – 2007 (cont.)
3x HP DL360 for Xen
 FC infrastructure
HP EVA
 70+ TB usable capacity, FATA drives
 Disk images for Xen machines
 The rest used as DPM storage
 Warranty ended in Dec 2010; a new box was cheaper than the warranty extension
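As a rough illustration of how FC-backed Xen guests of that era were typically defined (a minimal sketch, not the actual FZU configuration; all names, paths and the LUN device below are hypothetical), the domU config simply points its virtual disk at a LUN presented by the array:

    # /etc/xen/guest-example.cfg -- hypothetical Xen PV guest definition
    name    = "guest-example"
    memory  = 2048                        # MB of RAM for the guest
    vcpus   = 2
    kernel  = "/boot/vmlinuz-xen"         # PV kernel living in the dom0
    ramdisk = "/boot/initrd-xen.img"
    # the guest disk is a block device backed by an FC LUN from the array
    disk    = ['phy:/dev/mapper/eva-guest-example,xvda,w']
    vif     = ['bridge=xenbr0']           # bridged onto the dom0 network
    root    = "/dev/xvda1 ro"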

History: 2008
IBM iDataPlex dx340 nodes
 2x Xeon E5440 => 8 cores
20x Altix XE 310 twins (40 hosts)
 2x Xeon E5420 => 8 cores
3x Overland Ultamus 4800 (48 TB raw each)
SAN
Tape library NEO
VIA based NFS storage
First decommissioning of computing nodes
 2002's HP LP1000r

History: 2009
IBM iDataPlex dx360M2 nodes
 2x Xeon E5520 with HT => 16 cores
9x Altix XE 340 twins (18 hosts)
 2x Xeon E5520 without HT => 8 cores
All water cooled
3x Nexsan SataBeast (84 TB raw each)
SAN
3x Atom-based storage nodes
 Wrong idea, the WD15EADS-00P8B0 drives are just weird

History: 2009
SGI Altix ICE 8200
 Solid state physics
 512 cores (128 quad-core Xeons)
 1 TB RAM
 Infiniband
 6 TB disk array (Infiniband)
 Torque/Maui, OpenMPI
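The Torque/Maui plus OpenMPI combination implies work arrives as batch scripts; below is a minimal sketch of such a submission script (queue name, resource numbers and the solver binary are assumptions, not taken from the talk). OpenMPI built with Torque (tm) support picks the host list up from the batch environment, so no machinefile is needed:

    #!/bin/bash
    # hypothetical Torque/Maui job script for an OpenMPI run
    #PBS -N dft-example          # job name
    #PBS -q solidstate           # queue name is an assumption
    #PBS -l nodes=4:ppn=8        # 4 nodes x 8 cores = 32 MPI ranks
    #PBS -l walltime=24:00:00
    cd "$PBS_O_WORKDIR"          # Torque starts jobs in $HOME by default
    # with tm support, mpirun reads the allocated node list from Torque
    mpirun -np 32 ./my_solver input.dat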

Cooling
New water cooling infrastructure
 STULZ CLO 781A, 2x 88 kW
Additional ceiling for hot-cold air separation

History: 2010
IBM iDataPlex dx360M3 nodes
 2x Xeon X5650 with HT => 24 cores
Bull Optima 1500 – HP EVA replacement
8x Supermicro SC847E16-RJBOD
 DPM pool nodes
 1.3 PB
Second decommissioning of computing nodes
 2004's HP DL140
 8 IBM dx340 swapped for dx360M3

Network facilities
External connectivity delivered by CESNET, a GÉANT stakeholder
 10 Gb to the public Internet
 1 Gb to FZK (Karlsruhe)
 10 Gb connection demuxed into several dedicated 1 Gb lines
  FNAL, BNL, ASGC, Czech Tier-3s, ...
10 Gb internal network
 A few machines still on 1 Gb switches → LACP
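Bonding the remaining 1 Gb-attached machines with LACP aggregates several physical links into one logical interface; a minimal sketch for a Linux host of that era follows (interface names and the address are hypothetical, and the switch side must be configured as an LACP group too):

    # hypothetical LACP (802.3ad) bond of two 1 Gb NICs on Linux
    modprobe bonding mode=802.3ad miimon=100   # creates bond0
    ifenslave bond0 eth0 eth1                  # aggregate the two ports
    ip addr add 192.0.2.10/24 dev bond0        # example address
    ip link set bond0 up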

2011: Current Procurements
Estimates:
– Worker nodes, approx. HEPSPEC06
– 800 TB disk storage
– Water cooled rack + water cooled back doors
– Another NFS fileserver

Overall performance growth

Current numbers
275 WNs HEP, 76 WNs solid state physics
3000 cores HEP, 560 cores solid state physics
Torque & Maui
2 PB on disk servers (DPM or NFSv3)
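As an aside on how the NFSv3 part of that storage is typically consumed, a worker node would mount a disk server directly; the line below is a hedged sketch (server name, export path and mount options are assumptions, not from the talk):

    # hypothetical NFSv3 mount of a disk server on a worker node
    mount -t nfs -o vers=3,rw,hard,intr nfs01.example.org:/export/data /mnt/data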

Q & A
Questions?