ATLAS Software Installation at UIUC
"Mutability is immutable." - Heraclitus, ~400 B.C.E.
D. Errede, M. Neubauer

Goals:
1) Ability to analyze data locally (with constraints)
2) Ability to connect to other machines around the world
3) Ability to contribute to detector commissioning and calibration locally
4) Ability to store reasonable quantities of data for ease of analysis

Status:
1) Not completely accomplished.
2) Not completely accomplished.
3) Presently done at CERN (Irene Vichou); an effort is underway to make contributions locally (see S. Errede's talk).
4) In discussion with the local UIUC computing center on access to available disk space.

Ability to analyze data locally
1) Accomplished:
* Installation of the ATLAS release software locally, and also installed per machine. (There are restrictions on an NFS-type installation, which reduces disk-space usage.)
* ROOT software for ntuple/tree analysis is in place (a minimal usage sketch follows this slide). We do not expect to be able to generate large data sets of complete Monte Carlo data locally, because of the well-known generation-time constraints; perhaps fast-simulation MC data can be generated locally.
* Installation of the Scientific Linux 3 and 4 platforms on several local Linux machines.
* Installation of the Open Science Grid User Interface, allowing access to large farms (as clients, not as a Computing Element). The software is installed and simple Condor submission has been tested, but not all local idiosyncrasies are understood yet. Installed per user, not yet globally for the group.
* Installation of the dq2 software to access data files from the Open Science Grid.
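To make the ROOT ntuple/tree item above concrete, here is a minimal ROOT macro in C++ of the kind the local installation supports. The file name, tree name, and branch name are placeholders chosen for illustration, not actual UIUC or ATLAS dataset names.

// read_ntuple.C -- minimal sketch: open a local ntuple, loop over a tree, fill a histogram
#include "TFile.h"
#include "TTree.h"
#include "TH1F.h"
#include <iostream>

void read_ntuple() {
  // Placeholder file name: substitute a locally stored ntuple
  TFile *f = TFile::Open("local_ntuple.root");
  if (!f || f->IsZombie()) { std::cerr << "cannot open file" << std::endl; return; }

  // "analysis" is an assumed tree name
  TTree *t = (TTree*) f->Get("analysis");
  if (!t) { std::cerr << "tree not found" << std::endl; return; }

  Float_t pt = 0;                       // assumed branch: a transverse momentum in GeV
  t->SetBranchAddress("pt", &pt);

  TH1F *h = new TH1F("h_pt", "p_{T};p_{T} [GeV];entries", 100, 0., 200.);
  for (Long64_t i = 0; i < t->GetEntries(); ++i) {
    t->GetEntry(i);
    h->Fill(pt);
  }
  h->Draw();                            // an interactive ROOT session will display the histogram
}

One would run this inside an interactive ROOT session with .x read_ntuple.C.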

Ability to analyze data locally
Not accomplished:
* Installation of the OSG UI centrally for all users.

Ability to connect to other machines around the world
2) Accomplished:
* Security setup for logging into the CERN lxplus machines (LCG access and dq2 are working; ROOT is prohibitively slow), the SLAC Linux machines, and the BNL Linux machines.
* OSG UI installation allowing access to large OSG farms; simple submission tested, but not fully exercised.

Ability to contribute to the detector commissioning and calibration locally
3) See Steve Errede's talk. Presumably the primary requirement here is handling ROOT ntuples; ROOT exists and is usable.

Ability to store reasonable quantities of data for ease of analysis
4) In discussion with CITES and NCSA on access to some fraction of a petabyte of disk. NCSA has resources that we can perhaps tap into in the future. We already have a 10 Gbit/s connection to CITES and from there to Chicago's "hub". (A sketch of analyzing several locally stored files as one sample follows this slide.)
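Once dataset files retrieved with dq2 are sitting on local disk, several of them can be analyzed in a single pass with a ROOT TChain. This is a minimal C++ sketch; the tree name and file paths are hypothetical.

// chain_local_files.C -- sketch: treat several locally stored files as one sample
#include "TChain.h"
#include <iostream>

void chain_local_files() {
  TChain chain("analysis");                       // assumed tree name, identical in every file
  chain.Add("/data/atlas/dataset_part1.root");    // hypothetical local paths to dq2-retrieved files
  chain.Add("/data/atlas/dataset_part2.root");
  chain.Add("/data/atlas/dataset_part3.root");

  std::cout << "total entries in chain: " << chain.GetEntries() << std::endl;

  // A TChain behaves like a single TTree, so the SetBranchAddress/GetEntry loop
  // from the single-file macro above works unchanged on the combined sample.
}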

ARE WE DONE? ARE WE CLOSE TO BEING DONE? NO.
We will be establishing a systematic analysis effort locally (see Mark Neubauer's talk).
The HECA State of Illinois grant (D. Errede, P.I.) contributed over $200k toward computing equipment, some of which will be used by the ATLAS group here, though in a limited fashion due to memory constraints; 16 dual-processor boxes are for muon collaboration use.
Next project:
* Setting up 3 Linux machines locally (OSG00, 01, X0) with SL3/4 platforms on which to install ATLAS software for testing. The software changes and is updated constantly, so we will be starting with a 'clean slate' on which to work; we know, for instance, that some software does not work 'on top of' other software. Important to this effort will be the authority to use root privileges on our own machines.
The addition of Mark Neubauer to this effort is a tremendous contribution, given his background installing the Condor batch-queue system at CDF, his knowledge of other required languages such as Python, and, as can be seen above, his good idea for this next step/project.
Overall comment: because of the evolving nature of the ATLAS software, what is easy at the present time would have been quite difficult earlier on.

Toward a Tier-3 at UIUC: The "Problem"
The scale of the computing requirements for the LHC experiments is unprecedented in HEP's history:
* ATLAS: ~3x10^3 collaborators spread across the globe
* 3 PB/year of RAW data + 1 PB/year of ESD (see the rough rate estimate after this slide)
Much progress has been made on the pieces necessary to get ATLAS physics done on globally distributed computing (the Grid):
* Many complex pieces handle authentication, VOs, job submission/handling/monitoring, data handling, etc.
The system needs to be exercised from the perspective of a physicist "just trying to get their physics done" (i.e., the end user):
* A convenient and flexible interface to authentication, dataset creation/consumption, job submission, monitoring, and output retrieval
* Support and reliability of service worthy of a production system
These goals have not yet been fully achieved in ATLAS, but they need to be for the experiment to be successful!
"The world according to Mark"
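A rough back-of-envelope estimate (assuming the quoted 3 PB/year RAW + 1 PB/year ESD and about 3.2x10^7 seconds per year) illustrates the scale: the combined stream alone corresponds to roughly a gigabit per second of sustained transfer, before any reprocessing or user copies.

(3 + 1) x 10^15 bytes/year / (3.2 x 10^7 s/year) ≈ 1.3 x 10^8 bytes/s ≈ 1 Gb/s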

Toward a Tier-3 at UIUC: A "Solution"?
Deploy a "large" computing cluster configured as a Tier-3 and operated as a model Tier-3, in terms of reliability and utilization, for doing analysis at a home institution.
* Q: How large is "large"? A: Large enough for the ATLAS computing organization to pay attention and for UIUC to get our physics done. It could be as large as most Tier-2 sites.
Why do this at UIUC?
* We have an enormous amount of high-end computing infrastructure and technical expertise to pull this off!
* Infrastructure includes HEP & MRL computing, NCSA, and the recent availability of a 10 Gb/s network pipe to Chicago (the ATLAS Tier-2 center).
* Technical expertise includes the HEP group, networking experts, and a new addition: Mark Neubauer.
* Neubauer: at MIT, then UCSD with F. Wurthwein:
  - Led a complete redesign of CDF analysis computing -> the CDF Analysis Facility (CAF)
  - CAF Project Leader ( )
  - Involved in the subsequent migration to Condor and the utilization of offsite computing (one of, if not the, first operating Grids in HEP)
Goals:
* Drive ATLAS toward a successful computing model from the analysis end
* Get our physics done, and strengthen our collaborations (e.g., FTK)

Toward a Tier-3 at UIUC: A Prototype
We have recently assembled a prototype system to begin work on an ATLAS Tier-3 at UIUC (many thanks to Dave Lesny):
* 3 dual-processor Xeon boxes, 2 GB of memory, coming out of existing equipment.

Toward a Tier-3 at UIUC
[Figures: Proto-CAF (Oct 2001); FNAL CAF + 9 DCAFs (now)]

Toward a Tier-3 at UIUC
"The time to repair the roof is when the sun is shining" – JFK