CDF Status of Computing. Donatella Lucchesi, INFN and University of Padova. May 9, 2006.


Introduction
CDF has to reconstruct and analyze the largest data samples. How does CDF cope with the increasing data volume while dedicated resources and manpower are being reduced?

CDF Code and Data Production Status
Reconstruction code: stable and well performing. A final production release is planned within weeks. The production farm is just another CAF; it can be used to produce ntuples for the whole collaboration.

CDF User Resources: Not Enough
There is always something waiting in some queue, so more resources are needed.

How the CDF Resources Are Used
Fermilab is mostly analysis oriented, but also runs some Monte Carlo; offsite resources are used for Monte Carlo.

CNAF Is a Special Case
Some data are hosted there, and CNAF provides ~50% of the offsite resources.

CDF Usage of CNAF Resources
Some data analysis has been done there, but the site is largely used for Monte Carlo. Reminder: CNAF is a GlideCAF, developed by S. Sarkar and I. Sfiligoi, with opportunistic use of all resources. Other GlideCAFs: Lyon, San Diego, MIT and FermiGrid.

CDF Resource Usage with Respect to CNAF
(Plots: total CPU time since April 2005; total CPU time in April 2006.)

CDF Resource Needs
Proportional model: scale the FY2005 resources by the growth in data volume:
FY200x demand = (FY2005 resources) x (FY200x data volume / FY2005 data volume)
(Tables of input parameters and results shown on slide.)
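The proportional model above is a one-line scaling; it can be sketched as follows. The baseline resources and data volumes in the example are hypothetical placeholders, not the values from the slide's input-parameter table.

```python
# Proportional resource model: scale FY2005 resources by the growth in data volume.
# All numeric inputs below are hypothetical placeholders, not CDF's actual figures.

def demand(fy2005_resources: float, fy2005_data_vol: float, fyx_data_vol: float) -> float:
    """FY200x demand = (FY2005 resources) x (FY200x data volume / FY2005 data volume)."""
    return fy2005_resources * (fyx_data_vol / fy2005_data_vol)

# Example: 1000 KSI2K of CPU in FY2005 and a doubling of data volume (assumed numbers).
cpu_fy2007 = demand(1000.0, 1.0, 2.0)
print(cpu_fy2007)  # 2000.0
```

The same function applies to CPU, disk or tape: the model assumes resource demand grows linearly with the accumulated data volume.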

CDF Resource Requests
(Table of requests under the stated hypothesis shown on slide.)

CDF Resource Requests (IFC): Lower Bound
(Table shown on slide.)

Where CDF Can Find New Resources
Working on strategies to improve resource utilization:
- One-pass production
- Centralized ntuple production
- Run-dependent Monte Carlo production
Access to GRID resources:
- Merge several GlideCAFs into one (firewall traversal and scalability issues under study); a Fermilab and San Diego CMS group project.
- lcgcaf: use the gLite resource broker with the same CDF user interface but re-written code; Monte Carlo produced by power users up to now; a CDF-Italy and INFNGrid project.

Lcgcaf Design
Developers: F. Delli Paoli and D. Jeans, with advice from S. Sarkar and I. Sfiligoi.
(Architecture diagram shown on slide.)

Lcgcaf Status
Resources available in Italy outside the T1: 617 KSI2K (compare to the T1: 1800 KSI2K). Usage up to now: Monte Carlo production for the B group has started. Sites used: Tier1 and Padova. Output is stored for the moment at CNAF on a Storage Element (SE); automatic storage at FNAL is planned. GRID efficiency: 97%. Conclusion: Monte Carlo data can be produced outside CNAF.

Need for a Physics Center
Disk space:
- The BCHARM dataset and skimmed data are on disk.
- The size of these datasets is scaled using the luminosity, the logging rate and the event size.
(Table: what we need for B data shown on slide.)
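The dataset-size scaling mentioned above can be sketched as a simple product of events logged and event size. The logging rate, live time and event size below are illustrative assumptions, not the inputs behind the slide's table.

```python
# Estimate on-disk dataset size from logging rate, live time and event size.
# All numeric inputs below are illustrative assumptions.

def dataset_size_tb(logging_rate_hz: float, live_seconds: float, event_size_kb: float) -> float:
    """Size in TB = (events logged) x (event size), with 10^9 KB per TB (decimal units)."""
    n_events = logging_rate_hz * live_seconds
    return n_events * event_size_kb / 1e9

# Example: 20 Hz to the B stream, 10^7 live seconds, 140 KB/event (assumed).
size = dataset_size_tb(20.0, 1.0e7, 140.0)
print(round(size, 1))  # 28.0
```

Scaling to a higher luminosity then amounts to multiplying by the ratio of integrated luminosities, as in the proportional model.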

Need for a Physics Center
CDF already has some disk space assigned at CNAF (currently 65 TB). Half of the assigned resources are devoted to B data; we ask for what is missing.

Need for a Physics Center: CPUs
Total Tier1 figures are from Mirco's talk at CSN1, 6/7 February. CDF is not asking for more than what was already discussed and approved one year ago.

Manpower
Fermilab:
- I. Sfiligoi (CDF-GRID and dCAF coordinator): needs a replacement.
- San Diego group (CAF maintenance): needs to be replaced.
- R. Borgatti (SAM support at FNAL): contract at FNAL?
Italy:
- D. Lucchesi (coordination, SAM in Italy, data import).
- S. Sarkar (CNAF management, GlideCAF, disks, Tier1 interaction): ends in July 2006, new AR requested.
- F. Delli Paoli (lcgcaf management, CDF code maintenance): ends in September 2006.
- D. Jeans (GlideCAF, disks, user interface): ends in March 2007.
- L. Brigliadori (support for CNAF management): ends in January 2007.

BACKUP

CDF Data Recording
(Plot: integrated luminosity per detector, by Oct 09.) Peak instantaneous luminosity record: 1.8 x 10^32 cm^-2 s^-1. Total delivered: 1.6 fb^-1 per experiment; total recorded: 1.3 fb^-1 per experiment.
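As a quick cross-check of the numbers above, the implied data-taking efficiency is simply the recorded-to-delivered ratio; the two luminosity figures are taken from the slide, the computation itself is illustrative.

```python
# Data-taking efficiency implied by the backup-slide luminosity figures.
delivered_fb = 1.6  # fb^-1 delivered per experiment (from the slide)
recorded_fb = 1.3   # fb^-1 recorded per experiment (from the slide)

efficiency = recorded_fb / delivered_fb
print(f"{efficiency:.0%}")  # 81%
```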

Lower-Bound Resource Requests
(Table shown on slide.)

Lower-Bound Resource Requests (continued)
(Table shown on slide.)