CDF ICRB Meeting, January 24, 2002
Italy Analysis Plans
Stefano Belforte, INFN Trieste

Slide 1: Strategy and present hardware

Combine scattered Italian institutions to minimise effort; avoid (for the moment) an "Italian National CDF computer center".
- PAD-based analysis at Fermilab
- Copy ntuples to Italy
- Locate at FCC the resources for all work on secondary data sets (disk, CPU, user space); use them from Italy.

So far: DISK (~4 TB, mostly not there yet)
- 20% users, 30% common good (RH tax), 50% data
- Data shared with the collaboration (e.g. top → 6j, B → …)

Trailers for transients
- Only desktops (N_desks ~ N_cpu << N_physicists)
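As a quick illustration (my arithmetic, not on the original slide), the proposed split of the ~4 TB, once it is all in place, works out to:

\[
4\,\mathrm{TB} \times (0.20,\ 0.30,\ 0.50) = (0.8,\ 1.2,\ 2.0)\,\mathrm{TB}
\quad \text{(users, common good, data)}
\]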

Slide 2: Data Access and Copy

1st phase: copy ntuples from FNAL to Italy
- Need the capability to archive locally on tape (in the DFC?) secondary and tertiary data sets, and non-AC++ data formats
- Present copy tools (kerberised ftp/rcp) are OK
- Bandwidth FNAL → Italy: 50 Mbit/s

2nd phase: export secondary data sets to Italy
- Tape and disk: data caching over the WAN; the reverse direction is also desirable
- GRID-like tools
- Bandwidth FNAL → Italy: 300 Mbit/s
  - From CDF 5787 (a bit scaled up): 10 MByte/s per user × 20 users × 20% cache miss = 40 MByte/s
- Add Italy → FNAL: 500 Mbit/s by ~2005
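Checking the quoted rate (my arithmetic; the per-user figure is the one cited above from CDF 5787):

\[
10\,\mathrm{MB/s\ per\ user} \times 20\ \mathrm{users} \times 0.20\ \text{(cache-miss fraction)}
= 40\,\mathrm{MB/s} \approx 320\,\mathrm{Mbit/s},
\]

consistent with the ~300 Mbit/s requested above.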

Slide 3: Problems, Difficulties

Working outside the gate:
- Documentation: where are the data?
- Documentation: what exactly was done to them?
- Documentation: what has changed in version N.n.n?
- Documentation: what is the correct tcl file to use?
- Help via e-mail helps a lot, but it only arrives the day after
- Will we ever do without "a friend inside"?

Debugger:
- Still rely a lot on TotalView, but graphics from Italy is s o  s l o w

Getting the money:
- Why does your code run so slowly?
- How can we be sure you will not be a BaBar × 10?

Slide 4: Italy vs. FKW

INFN gave very good feedback on the new CAF:
- Commodity hardware
- Move toward GRID

Next (2002-03): farmlets at FCC
- One "farmlet" for every analysis (or a few) (money for 4 on Monday?)
- Disk also for tertiary data sets and/or other data formats
- CPU accessible to everybody, with "priorities"
- We need the new farm to work and need its features; we will want to reuse them at home
- Work on the batch system (→ FNAL × 3 months)

Future (2003-04? Even before Run 2b!): GRID
- Pull the farmlets to Italy
- Exploit the future INFN Tier 1: symmetric access to/from FNAL, hopefully still available to everybody in CDF; much easier to get significantly more hardware
- Only possible because others are paving the way (DØ, UK, Spain, ...)