US ATLAS Western Tier 2 Status and Plan
Wei Yang
ATLAS Physics Analysis Retreat, SLAC, March 5, 2007


Don Quijote II (DQ2): the ATLAS Distributed Data Management system
[Diagram: a central catalog database at CERN holds the dataset info, dataset replica info, and site subscription info. DQ2/LRC site services at the Tier 1s and Tier 2s (RAL T1, Lyon T1, BNL disk, WT2 disk, Midwest T2, Northeast T2, i.e. the BNL cloud) use the File Transfer Service to move files, for example when WT2 is subscribed to a new dataset A.]
DQ2 tools: dq2 listFilesInDataset, dq2 listDatasetReplicas, dq2 listDatasetsInSite; end-user client tools: dq2_ls, dq2_get.
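For illustration, a minimal sketch of the catalog queries named on this slide, using the DQ2 client commands exactly as listed; the dataset and site names are placeholders, and option syntax may differ between DQ2 client versions.

    # List the files that make up a dataset (central catalog query)
    dq2 listFilesInDataset <DATASET_NAME>

    # Find which sites hold replicas of the dataset
    dq2 listDatasetReplicas <DATASET_NAME>

    # List the datasets resident at a given site, e.g. the Western Tier 2
    dq2 listDatasetsInSite <SITE_NAME>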

How local users access datasets
- Use the "dq2" command to:
  - check the files in a dataset
  - find which site has the interesting datasets
- Go to that site and run jobs, OR
- Use the dq2 client tools (dq2_ls, dq2_get) to copy the dataset to your own storage
- Run jobs on your laptop
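A minimal sketch of the copy-to-local-storage path described above; the dataset name and target directory are placeholders, and command-line options are omitted since they vary between dq2 client releases.

    # Inspect the dataset before pulling it
    dq2_ls <DATASET_NAME>

    # Copy the dataset files into local storage, then run Athena/ROOT on them locally
    cd /path/to/local/storage
    dq2_get <DATASET_NAME>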

Panda
[Diagram: Panda manages a queue of tasks (task 1, task 2, ..., task n); a Panda job obtains its input through the DDM central catalog and the BNL DQ2/LRC, runs on Grid/LSF resources, and writes its result back to storage.]
- Panda uses DDM
- Panda is a US ATLAS production system
- Panda supports distributed analysis via the grid
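As an illustration of the distributed-analysis path, a sketch of submitting an Athena job to Panda with the pathena client; the job options file and dataset names are placeholders, and the --inDS/--outDS option names are assumptions about the pathena client of that era rather than something taken from this slide.

    # Submit an Athena job to Panda (all names are placeholders)
    pathena MyAnalysis_jobOptions.py \
        --inDS  <INPUT_DATASET_NAME> \
        --outDS user.<nickname>.<OUTPUT_DATASET_NAME>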

Current Resources at the Western Tier 2
- 7% fair share of the SLAC "shared" LSF batch resource for local ATLAS users
- 2.4 TB NFS space for user working directories
- AFS space for ATLAS releases
- Pacman mirror for ATLAS software
- Grid infrastructure: gatekeeper, GridFTP, GUMS
- DQ2 site service
- 2.3 TB NFS space (NetApp) for the DQ2 repository
- 500 GB NFS space for ATLAS production software
- 7% LSF fair share for production
None of this comes from ATLAS funding; the ATLAS-funded equipment will arrive in March/April.

Architecture
[Diagram: grid services (gatekeeper/GridFTP/GUMS, DQ2/LRC, proxy, GridFTP, SRM) and the LSF master front 78 x 4-core Sun Fire X2200 M2 worker nodes behind a Cisco 6509; an Xrootd redirector fronts Xrootd servers on Sun X4500 boxes (54 TB usable) plus NFS servers; desktops sit behind the SLAC firewall, and external traffic reaches the Internet Free Zone through Cisco switches over 1 Gbit, 4 Gbit, and 10 Gbit links.]

Western Tier 2 Plan, Software:
- Red Hat Enterprise Linux: RHEL 3 32-bit, RHEL 4 32/64-bit
- LSF 6.1: many years of experience with LSF
- Xrootd: in-house expertise, and experience from real usage by BaBar
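Since the plan builds on LSF, here is a minimal sketch of how a local user might submit an Athena job to the batch system; the queue name and job script are placeholders, and the fair-share accounting itself is configured by the LSF administrators rather than on the command line.

    # Submit a job to an (assumed) ATLAS queue; usage is charged against
    # the ATLAS fair share automatically on the LSF side.
    bsub -q <atlas_queue> -o athena_%J.log ./run_athena.sh

    # Check the job status
    bjobs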

Western Tier 2 Plan (cont'd), Hardware:
- Expect the ATLAS equipment to arrive in March/April
- 312 CPU cores in 78 Sun X2200 M2 nodes: AMD Opteron 2218, 4 cores, 8 GB memory, 2 x 250 GB disks per node
- 54 TB usable storage in 3 "Thumper" boxes: Sun X4500, 18 TB usable each (48 SATA drives), 4 AMD Opteron CPU cores, 16 GB memory, 4 x 1 Gbit network, Solaris 10 x86 and ZFS
- 10 Gbit external network
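For context on how a Thumper's 48 drives might end up as about 18 usable TB, a hedged sketch of ZFS pool creation on Solaris 10; the device names, the raidz grouping, and the filesystem name are assumptions for illustration, not the site's actual layout.

    # Assumed layout: group the SATA drives into raidz vdevs, then carve out
    # a filesystem for data serving (illustrative only; device names invented).
    zpool create atlaspool \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    # ... further raidz groups would cover the remaining drives

    zfs create atlaspool/data
    zpool status atlaspool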

Resource: revised hardware plan for CPU and storage
(columns: Projected / Purchased / Current)
CPU (kSI2K):
Disk (usable TB):
CPU $ ratio: 2/3, 1/3
Storage $ ratio: 1/3, 2/3

Current Resource Allocation
CPU:
  Production:  50%  (7% LSF shares)
  Local user:  50%  (7% LSF shares)
Disk:
  Production data:      44%  (2.3 TB)
  Production software:  10%  (0.5 TB)
  Local user:           46%  (2.4 TB)
Future resource allocation is determined by ATLAS management and the Western Tier 2 Advisory Committee.

Resource Utilization
Future resource:
  78 x 4 x 1.35 kSI2K = 421 kSI2K, about 6.0 x 10^4 CPU hours/day
  Half of it for production: 210 kSI2K, about 3.0 x 10^4 CPU hours/day
Current resource:
  Best day in February: 239 kSI2K, about 3.4 x 10^4 CPU hours/day
  Average in February: 91 kSI2K, about 1.3 x 10^4 CPU hours/day
(CPU hours are in SLAC units)
- Utilization is low for many reasons
- The above numbers don't tell us how to improve
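A quick check of the capacity arithmetic above; the 1.35 kSI2K per-core rating comes from the slide itself, and the conversion to SLAC-unit CPU hours is taken as given rather than derived here.

    # 78 nodes x 4 cores x 1.35 kSI2K per core
    echo "78 * 4 * 1.35" | bc           # 421.20, the slide's 421 kSI2K

    # Half of that, reserved for production
    echo "scale=1; 421.2 / 2" | bc      # 210.6, quoted as 210 kSI2K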

[Plots in the left column of the original slide; the monthly average is more accurate than a single day.]

Recent Activities at the Western Tier 2
- Developed a GridFTP Data Storage Interface (DSI) for Xrootd
- Installed a DQ2 site service with an Xrootd backend
- Working with the Panda team to use this new DQ2 site and the Xrootd storage
  - Can a Pathena job READ root files from the Xrootd server directly?
  - Can a local Athena job READ root files from the Xrootd server? (see the sketch below)
- Tier 2 hardware selection and purchase order completed
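The direct-read question boils down to whether ROOT-based jobs can open files through the Xrootd redirector. A minimal sketch, assuming a hypothetical redirector hostname and file path; xrdcp and ROOT's root:// protocol support are standard Xrootd/ROOT features, not something specific to this site.

    # Copy a file out of the Xrootd cluster via the redirector (placeholders throughout)
    xrdcp root://<redirector>//atlas/<path>/<file>.root /tmp/

    # Direct read: inside ROOT (or in an Athena job's input file list) the same
    # file is addressed by its root:// URL instead of a local path, e.g.
    #   TFile::Open("root://<redirector>//atlas/<path>/<file>.root")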

Issues
- Power
  - 18 TB will be online in April (high priority)
  - CPU and the rest of the storage will be in racks in June/July, but without power
  - 10 Gbit external network is installed, waiting for power
  - Hope to solve the current power crisis in late summer
- Can the dq2 client tools work with Xrootd storage, or with any other non-UNIX/NFS storage?
- DDM software is better than before, but still has problems