Outline:
- Tasks and Goals
- The analysis (physics)
- Resources Needed (Tier1)
A. Sidoti, INFN Pisa

Tasks and Goals
Goal: transfer most of the analysis performed in Italy from FNAL to CNAF.
- Start with one real physics analysis in Italy; debug (2003)
- Task management between CDF-Italy and CNAF (e.g. which users to enable, scheduling priorities, which analyses to schedule)
- Expand to more hardware and more data analyses (2004)
- Export in Italy (2005?) ...
(FNAL -> ITALY; from last year's talk by Stefano)

CDF Central Analysis Farm
- Compile/link/debug everywhere
- Submit from everywhere to FNAL:
  - Submission of N parallel jobs with a single command (see the splitting sketch after this slide)
  - Access data from CAF disks
  - Access tape data via a transparent cache
- Get job output everywhere:
  - Store small output on a local scratch area for later analysis
  - Access the scratch area from everywhere
- IT WORKS NOW
- Remote cloning: DONE! Works now at Tier1 (and at 5 other places)
[Diagram: my desktop / my favourite computer -> gateway (ftp, job submission, log output) -> FNAL CAF: a pile of PCs running N jobs, local data servers (NFS, rootd), scratch server (rootd), dCache in front of the enstore tape system]
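The "N parallel jobs with a single command" model above amounts to splitting a dataset's file list into N segments and running the same executable on each. The sketch below illustrates only that splitting step; the names (JobSegment, split_into_segments) and file paths are hypothetical, not the real CAF submission tooling.

```python
# Minimal sketch (not the real CAF tooling) of splitting a dataset's file
# list into N parallel job segments, each with its own small output file.
from dataclasses import dataclass
from typing import List

@dataclass
class JobSegment:
    segment_id: int          # which slice of the dataset this job handles
    input_files: List[str]   # files read from the CAF data disks (or cache)
    output_file: str         # small output copied back to the user's scratch area

def split_into_segments(input_files: List[str], n_segments: int,
                        output_prefix: str) -> List[JobSegment]:
    """Distribute the dataset's files round-robin over n parallel job segments."""
    segments = [JobSegment(i, [], f"{output_prefix}_{i}.root")
                for i in range(n_segments)]
    for idx, path in enumerate(input_files):
        segments[idx % n_segments].input_files.append(path)
    return segments

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    files = [f"/cdf/data/welectron/file_{i:04d}.root" for i in range(100)]
    for seg in split_into_segments(files, n_segments=10, output_prefix="wenu"):
        print(seg.segment_id, len(seg.input_files), seg.output_file)
```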

30 days of work in Jan/Feb (Rosso, Sidoti, Belforte)
- 3 dual 2 GHz machines (vs. 5 promised last year): 2 worker nodes + 1 head node (batch manager)
- Disk server, 1 TB (OK)
- Administration work completed:
  - System: RH, disk partitions, users (cdfcaf, cafuser, cafmon, cdfdata)
  - Kerberos (uses the FNAL.GOV KDC), FBSNG
  - CAF software (several FNAL-specific assumptions fixed)
- Not installed yet:
  - Monitoring (especially for the batch system)
  - Users' disk space for temporary storage (need more disk)
The farm is working! (Thanks to Felice)

Monitoring via FBSNG: job details

Monitoring via FBSNG: queues and nodes

Snapshot of Monitoring

Installation and Setup
- CDF software (and the batch system) installed on AFS: if AFS goes down, the farm stops. A local install is possible if needed; we hope not to need it.
- Users compile their code at home.
- Authentication uses Kerberos (FNAL.GOV realm). Users are added to the farm as queues: no system-manager intervention necessary, no passwords (see the illustrative mapping after this slide).
- Security of the head node is still less than FNAL requires; we should at least try to fully kerberise it. Keytab files on the head node permit access to FNAL machines, so it is a security-critical machine.
- Adding worker nodes should be easy; we need to try.
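As an illustration of the "one queue per user, Kerberos only" bookkeeping referenced above, a minimal sketch; the realm check and the caf_<user> queue-naming convention are assumptions for illustration, not the farm's actual FBSNG configuration.

```python
# Illustrative sketch only: deriving a per-user batch queue name from a
# Kerberos principal in the FNAL.GOV realm. The "caf_<user>" convention is
# an assumption, not the actual FBSNG setup used on the farm.
ALLOWED_REALM = "FNAL.GOV"

def queue_for_principal(principal: str) -> str:
    """Map e.g. 'sidoti@FNAL.GOV' to a per-user queue name."""
    user, _, realm = principal.partition("@")
    if realm != ALLOWED_REALM:
        raise ValueError(f"unexpected realm {realm!r}; only {ALLOWED_REALM} accepted")
    return f"caf_{user}"

if __name__ == "__main__":
    print(queue_for_principal("sidoti@FNAL.GOV"))  # -> caf_sidoti
```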

From prototype to "real" work: physics analysis
- W -> e nu at high pseudo-rapidity: a "natural" follow-up of the work on ISL construction and commissioning
- The whole group (~4 people) is in Pisa
- Datasets are relatively "small"
- Can have physics results NOW!
  - "PR" plots (March)
  - Sigma x BR (April-May)
  - W asymmetry (Summer)

Transverse mass, W -> e nu (1.1 < |eta| < 2.8)
- QCD background subtracted
- No Z -> ee or W -> tau nu subtraction
- No improved plug EM corrections

Resource estimate
- Data until the January Tevatron shutdown (~80 pb^-1):
  - Central electrons (~420 GB)
  - Plug electrons (~660 GB, will be reduced with a quality selection)
  - Expect O(700 GB) in total, plus output
- CPU: approximately 1 h x 1 GHz per GB (1 GB typical file size); see the back-of-envelope sketch after this slide
- Present system: ~1 pass/user/week (not great)
- With 5 worker nodes: ~1 pass/user/day (OK!)
- Starts being appealing with respect to FNAL (~300 nodes, but hundreds of aggressive users!)
- MC needed to test different PDFs (for the W asymmetry) -> summer
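The back-of-envelope sketch referenced above, assuming the dual 2 GHz worker nodes described earlier and the slide's ~1 CPU-hour per GHz per GB rule of thumb; the result is order-of-magnitude only, but it is consistent with the quoted pass rates.

```python
# Back-of-envelope CPU estimate using the slide's rule of thumb
# (~1 CPU-hour per GHz per GB) and dual 2 GHz worker nodes.
DATASET_GB = 700            # expected input after the quality selection
CPU_HOURS_PER_GHZ_GB = 1.0  # rule of thumb from the slide

def hours_per_pass(n_worker_nodes: int, cpus_per_node: int = 2,
                   ghz_per_cpu: float = 2.0) -> float:
    total_ghz = n_worker_nodes * cpus_per_node * ghz_per_cpu
    return DATASET_GB * CPU_HOURS_PER_GHZ_GB / total_ghz

for nodes in (2, 5):
    h = hours_per_pass(nodes)
    print(f"{nodes} worker nodes: ~{h:.0f} h per pass (~{h / 24:.1f} days)")
# 2 worker nodes -> ~88 h: with sharing, roughly one pass per user per week
# 5 worker nodes -> ~35 h: approaching one pass per user per day
```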

Importing data
- Crucial aspect: the only real disadvantage with respect to FNAL.
- For the moment we have used parallel ftp or rcp to copy data from FNAL.
- Populating the local disks shows poor performance so far (4 MB/s), probably limited by the server at FNAL; see the arithmetic after this slide.
- Still investigating...
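The arithmetic referenced above, assuming the ~700 GB dataset from the resource-estimate slide and a sustained 4 MB/s; a rough sketch to show why the import rate is the bottleneck.

```python
# Time to copy the expected ~700 GB input from FNAL at the observed 4 MB/s.
# Simple single-stream arithmetic; real copies used parallel ftp/rcp.
DATASET_GB = 700
RATE_MB_PER_S = 4.0

seconds = DATASET_GB * 1024 / RATE_MB_PER_S
print(f"~{seconds / 3600:.0f} h (~{seconds / 86400:.1f} days) at {RATE_MB_PER_S} MB/s")
# -> ~50 h (~2.1 days): importing the dataset takes longer than one analysis pass
```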

Conclusions
- Performing a physics analysis using the CNAF farm is possible.
- We need more CPU (3 more nodes) to make jobs run faster in Italy than at FNAL. Help us preserve the enthusiasm in this project!
- Disk space for the coming luminosity (x3 by summer): we expect to use a compressed data format; another TB this summer would probably be useful.
- We would like to expand to other analysis groups after the summer.
- Optimistic scenario: in 2004, buy hardware at the Tier1 rather than at FNAL (plan in June, write in July, discuss in September).

Slide intentionally left blank.