ALICE T2 / FAIR Scope of the Meeting:


ALICE T2 / FAIR Scope of the Meeting:
- Summarize where we are, what we know, what we believe, ...
- What is the timeline? What is needed when?
- Establish a clear procedure to come to a decision within 2-3 weeks on how to spend approx. 240 k€.
- Unfortunately, we have only 2 hours today.

GSI ALICE T2/T3 2008/2009: 990 k€; FAIR invest 2010 ff.: 300 k€ per year

CPU (kSI2k)          2007   2008   2009   2010
  pledged T2          260    660    860   1100
  planned T2/T3       400   1000   1300   1700
  BMBF T2 project            2500   2860

Disk (TB)            2007   2008   2009   2010
  pledged T2           80    200    260    340
  planned T2/T3       120    300    390    510
  BMBF T2 project                   800

CPU (kSI2k)          2007   2008   2009   2010
  pledged T2          260    660    860   1100
  planned T2/T3       400   1000   1300   1700
  BMBF T2 project            2500   2860

Hardware purchases:
  end 2006:  15 boxes, 2 x 2-core 2.67 GHz Xeon, 4 x 750 GB disk   (D-Grid)
  mid 2007:  24 boxes, 2 x 4-core 2.67 GHz Xeon, 4 x 500 GB disk   (GSI)
  end 2007:  54 boxes, 2 x 4-core 2.67 GHz Xeon, 4 x 500 GB disk   (D-Grid)
  mid 2008: 112 boxes, 2 x 4-core 2.67 GHz Xeon, system disk only  (BMBF/ALICE T2)

24 + 54 + 112 = 190 boxes, ~1500 cores
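As a quick sanity check of the slide's arithmetic (a sketch; the batch labels and the 8-cores-per-box figure for the 2 x 4-core Xeons are read off the lines above):

```python
# Sanity check of the compute totals quoted above.
# Box counts and cores per box (2 x 4-core Xeon = 8) are taken from the slide.
batches = [
    ("mid 2007 (GSI)",            24, 8),
    ("end 2007 (D-Grid)",         54, 8),
    ("mid 2008 (BMBF/ALICE T2)", 112, 8),
]
boxes = sum(n for _, n, _ in batches)
cores = sum(n * c for _, n, c in batches)
print(f"{boxes} boxes, {cores} cores")  # 190 boxes, 1520 cores (~1500)
```

The 15 boxes from end 2006 (2 x 2-core) are left out, matching the "24 + 54 + 112" sum on the slide.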

Disk (TB)            2007   2008   2009   2010
  pledged T2           80    200    260    340
  planned T2/T3       120    300    390    510
  BMBF T2 project                   800

Storage purchases:
  mid 2007:  20 x 6 TB net = 120 TB         (GSI)
  end 2007:  35 x 6 TB net = 210 TB         (D-Grid)
  beg 2009:  call for tender, 500 TB net    (BMBF/ALICE T2)

BMBF/ALICE T2 total: ~800 TB
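The disk arithmetic above can be checked the same way (a sketch; unit counts and capacities are copied from the slide):

```python
# Net storage per purchase, in TB, as listed on the slide.
purchases_tb = {
    "mid 2007 (GSI)":     20 * 6,   # 20 units of 6 TB net
    "end 2007 (D-Grid)":  35 * 6,   # 35 units of 6 TB net
    "beg 2009 (tender)":  500,      # 500 TB net, call for tender
}
total_tb = sum(purchases_tb.values())
print(f"{total_tb} TB")  # 830 TB, matching the "~800 TB" quoted above
```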

We want to provide a mixture of:
- a Tier 2 integrated in the AliEn Grid
- a Tier 3, including a PROOF farm, integrated in the standard GSI batch farm (GSI, FAIR)
We want to be able to readjust the relative sizes of the different parts on request.

What do we need to decide today?
- Can we decide today? Do we need additional information? How much time do we have?
- Shall we go for local disks or not?
- Do we want to split the money?
- Is there synergy with FAIR? Should we buy something useful for FAIR now and hope to get something back later? ...

Budget: 990 k€ from BMBF for 2008/9
  -300 k€  BG 2 infrastructure
  -200 k€  blades (112 x 8 cores)
  - 40 k€  racks
  - 10 k€  network equipment
   240 k€  remaining for CPUs, with or without local disks