ATLAS TIER3 in Valencia Santiago González de la Hoz IFIC – Instituto de Física Corpuscular (Valencia)

Presentation transcript:

ATLAS TIER3 in Valencia Santiago González de la Hoz IFIC – Instituto de Física Corpuscular (Valencia)

Tier 3 prototype at IFIC
- (I) Individual desktop or laptop
- (II) ATLAS Collaboration Tier2 resources (Spanish T2) and ATLAS Tier3 resources at the institute
- (III) Special requirements: a PC farm to perform interactive analysis

Individual desktop or laptop
- Access to the ATLAS software via AFS (Athena, ROOT, Atlantis, etc.):
    /afs/ific.uv.es/project/atlas/software/releases
  - This is not easy: we have used the installation kit, but it does not work for development or nightly releases.
  - Local checks, to develop analysis code before submitting larger jobs to the Tier1/Tier2 via the Grid.
- Use of the ATLAS Grid resources (UI):
    /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
- DQ2 client installed on the IFIC AFS:
    /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
  - Users can search for data and copy them to the local SE.
- Ganga client installed:
    /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
  - Users can send jobs to the Grid for their analysis or production.
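A typical session on such a desktop could look roughly like the sketch below. It is only an illustration: the AFS paths are the ones quoted above, the dataset names are placeholders, and the exact names and options of the DQ2 end-user commands varied between versions.

    # Set up the Grid UI from AFS and obtain an ATLAS VO proxy
    source /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
    voms-proxy-init -voms atlas

    # Search for a dataset and copy it locally with the DQ2 end-user tools
    # (dataset names below are placeholders)
    dq2_ls "<dataset-name-pattern>"
    dq2_get <dataset-name>

    # Set up Ganga and start it to prepare and submit Grid jobs
    source /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
    ganga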

Phase (I) done
- From any PC at IFIC with AFS
- Requirements file at cmthome:
    set CMTSITE STANDALONE
    set SITEROOT /afs/ific.uv.es/project/atlas/software/releases
    macro ATLAS_DIST_AREA /afs/ific.uv.es/project/atlas/software/releases
    macro ATLAS_TEST_AREA ${HOME}/testarea
    apply_tag projectArea
    macro SITE_PROJECT_AREA ${SITEROOT}
    macro EXTERNAL_PROJECT_AREA ${SITEROOT}
    apply_tag setup
    apply_tag simpleTest
    use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
    set CMTCONFIG i686-slc3-gcc323-opt
    set DBRELEASE_INSTALLED
- Release test:
    source /afs/ific.uv.es/project/atlas/software/releases/CMT/v1r19/mgr/setup.sh
    cd $HOME/cmthome/
    cmt config
    /usr/kerberos/bin/kinit -4
    source ~/cmthome/setup.sh -tag=12.0.6,32
    cd $HOME/testarea/12.0.6/
    cmt co -r UserAnalysis PhysicsAnalysis/AnalysisCommon/UserAnalysis
    cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
    source setup.sh
    gmake
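Once the UserAnalysis package has compiled, a quick local test of the release could look like the sketch below. It assumes the standard AnalysisSkeleton example shipped with the UserAnalysis package; the job-options file name and the run directory are illustrative and may differ for other releases.

    # Run the example Athena analysis job locally (no Grid involved)
    cd $HOME/testarea/12.0.6/PhysicsAnalysis/AnalysisCommon/UserAnalysis
    mkdir -p run && cd run
    get_files AnalysisSkeleton_topOptions.py   # copy the example job options into the run directory
    athena.py AnalysisSkeleton_topOptions.py   # runs over the input AODs configured in the job options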

[Diagram: local SE (StoRM & Lustre) serving both GANGA/Athena AOD analysis and ROOT/PROOF DPD or ntuple analysis]
- Ways to submit our jobs to other Grid sites
- Tools to transfer data
- Work with the ATLAS software
- Use of final analysis tools (e.g. ROOT)
- User disk space

Phase (II): resources coupled to the Spanish ATLAS Tier2 (in progress)
- Nominal ATLAS Collaboration resources: disk in TB (SE) and CPU (WN)
- Tier2 extra resources: WNs and SEs used only by Tier3 users
  - Using different/shared queues
  - To run local and private production: MC samples of special interest for our institute (AODs for further analysis)
  - To analyse AODs using Grid resources (AOD analysis on millions of events)
  - To store interesting data for analysis (see the sketch below)
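As an illustration of the last point, registering a locally produced AOD file on the Tier3 storage element with the lcg-utils of that period might look like the following sketch; the SE hostname and logical file name are hypothetical and the exact options depend on the site configuration.

    # Copy a private AOD to the local SE and register it in the catalogue
    # (SE host and LFN below are placeholders, not the real IFIC endpoints)
    lcg-cr --vo atlas -d srm.example.ific.uv.es \
           -l lfn:/grid/atlas/users/myuser/my_private.AOD.pool.root \
           file:$PWD/my_private.AOD.pool.root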

Phase (III): a PC farm to perform interactive analysis outside the Grid (to be deployed)
- Interactive analysis: DPD analysis (e.g. HighPtView, SAN or AODroot)
- Install PROOF on a PC farm:
  - Parallel ROOT Facility: a system for the interactive analysis of very large sets of ROOT data files (see Sergey's slides of 5th Oct)
  - Outside the Grid nodes
- Fast access to the data: Lustre and/or xrootd
  - StoRM and Lustre under evaluation at IFIC
  - xrootd (see Sergey's slides)
- Tier3 Grid and non-Grid resources are going to use the same SE

- StoRM: disk-based SE, under evaluation; currently UnixfsSRM (dCache SRM version) is used for production in the Tier2
- Lustre: in production in our Tier2
  - High-performance file system
  - Standard file system, easy to use
  - Higher I/O capacity thanks to the cluster file system
  - Used in supercomputer centres
  - Free version available
- Hardware: 2 clients for GridFTP and SRM, and 2 disk servers with 17 TB each (34 TB in total)
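For reference, making the Lustre file system visible on a worker node or interactive machine is an ordinary mount; a minimal sketch, assuming a hypothetical management-server name, file-system name and mount point (the real IFIC configuration may differ):

    # Mount the Lustre file system on a client node (hostnames and fsname are placeholders)
    mkdir -p /lustre/ific.uv.es
    mount -t lustre mgs01.ific.uv.es@tcp0:/ific /lustre/ific.uv.es

    # Grid jobs and interactive ROOT/PROOF sessions then read the same files
    # through ordinary POSIX I/O
    ls /lustre/ific.uv.es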

Tests: RFIO without Castor
[Plot: Athena analysis of AODs reads at about 4 MB/s; the rate is limited by the CPU and by Athena]

The same test with DPDs (in both cases Lustre was used and the data were in the cache):
- 1.8 MB/s with ROOT
- 340 MB/s with a simple "cat"
Test machine: 2 x Intel Xeon 3.06 GHz, 4 GB RAM, 1 Gigabit Ethernet NIC, HD: ST...AS (on a 3Ware ...LP card)
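The raw-throughput figure above can be reproduced with a trivial sequential read; a sketch, with the DPD path as a placeholder:

    # Sequential read of a cached DPD file from Lustre; dd reports an effective MB/s figure
    dd if=/lustre/ific.uv.es/atlas/dpd/my_sample.root of=/dev/null bs=1M

    # Or simply time a cat, as in the test quoted above
    time cat /lustre/ific.uv.es/atlas/dpd/my_sample.root > /dev/null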

Summary
- From her/his desktop or laptop an individual physicist can get access to:
  - the IFIC Tier2-Tier3 resources;
  - the ATLAS software (Athena, Atlantis, etc.), the DDM/DQ2 tools and Ganga.
- The IFIC Tier3 resources will be split in two parts:
  - Some resources coupled to the IFIC Tier2 (Spanish ATLAS T2) in a Grid environment, for AOD analysis on millions of geographically distributed events.
  - A PC farm to perform interactive analysis outside the Grid, to check and validate major analysis tasks before submitting them to large computer farms.
- A PROOF farm will be installed to do the interactive analysis.