Western Tier 2 Site at SLAC
Wei Yang
US ATLAS Tier 2 Workshop, Harvard University, August 17-18, 2006

- The Western Tier 2 was approved in July 2006
- SLAC has had an Open Science Grid deployment for years
- Has been testing ATLAS environments on existing resources since 2005
- Successfully ran ATLAS jobs via Panda/Grid and local submission
- New equipment will arrive before December; service to ATLAS will begin in September

The Western Tier 2 Team at SLAC
Management (0 FTE on ATLAS funds)
- Richard Mount: 10% (of time)
- Chuck Boeheim: 15%
- Randy Melen: 20%
- Advisory Board
Technical Support (1 FTE on ATLAS funds)
- Wei Yang, Western Tier 2 Contact: 30%-100% (ATLAS has the highest priority)
- Booker Bense, Grid infrastructure: 20%
- Lance Nakata, Storage: 20%
- Scientific Computing and Computer Services: 30%

Resource for Open Science Grid (primarily used by ATLAS)
- 4 Sun V20z in production: dual Opteron 1.8 GHz, 2 GB memory, 70 GB local disk
- OSG gatekeeper / gsiftp (for DQ2)
- GUMS 1.1 supporting the US ATLAS VO
- VOMS / VOMS Admin
- NFS space for OSG $APP, $DATA, $TMP on a dedicated Solaris server
- 20 VA Linux 1220s for development

Resource for ATLAS
- MySQL replica of the conditions database (CondDB)
- DQ2 site server / web proxy for Panda pilots
- 500 GB NFS space for DQ2 data
- 10 job slots per user in the LSF batch system for grid users (see the sketch after this list)
- Access to LSF for local ATLAS users
- AFS space for the ATLAS software kit and environment
- 250 GB work space for local ATLAS users
- Prototype dCache
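To illustrate the per-user slot limit above, a minimal sketch of an LSF lsb.users configuration is shown here. It is an illustrative assumption, not the actual SLAC configuration, and the account names are hypothetical.

    # lsb.users -- illustrative sketch only; account names are hypothetical.
    # Cap each hypothetical grid pool account at 10 running job slots and
    # leave other (local) users without a per-user limit.
    Begin User
    USER_NAME     MAX_JOBS
    osgatlas01    10
    osgatlas02    10
    default       -
    End User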

“Shared” Resource: Leveraging Existing Infrastructure and Expertise
- ~3,700 CPU cores of LSF batch nodes, RHEL 3 and 4
- ~30 CPU cores of interactive nodes, RHEL 3 and Scientific Linux 3
- 10 Gb/s to ESnet, 10 Gb/s to DOE data-intensive science, 1 GB/s to Internet2
- Expertise in OS, batch, storage, network, security, power, cooling, etc.
- Red Hat provides a very low-cost license and excellent support

Challenges
- Grid jobs overload LSF by checking job status frequently; mitigation: cache job status information (see the sketch after this list)
- Batch nodes have no internet access; mitigations: a web proxy for Panda pilots and a local CondDB replica
- JobTransformation 11.0.X.X doesn't use the CondDB replica; mitigation: iptables NAT rules on batch nodes redirect the TCP traffic
- Security issues with the DQ2 web server and MySQL
- Want to use LSF fair share instead of dedicated queues
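As an illustration of the first mitigation (caching job-status information), below is a minimal Python sketch of a status cache placed in front of LSF. It is a sketch under stated assumptions, not the actual SLAC implementation: the bjobs polling command, the 60-second cache lifetime, and the wrapper itself are chosen for illustration.

    #!/usr/bin/env python3
    # Hedged sketch: answer frequent job-status polls from a cache so that
    # the LSF master is queried at most once per CACHE_LIFETIME per user.
    # The "bjobs" invocation and the lifetime are illustrative assumptions.
    import subprocess
    import time

    CACHE_LIFETIME = 60.0          # seconds to reuse a cached status snapshot
    _cache = {}                    # user -> (timestamp, bjobs output)

    def job_status(user="all"):
        """Return (possibly cached) bjobs output for `user`."""
        now = time.time()
        timestamp, output = _cache.get(user, (0.0, ""))
        if now - timestamp > CACHE_LIFETIME:
            # Only one real LSF query per lifetime window, however many callers poll.
            result = subprocess.run(["bjobs", "-u", user, "-w"],
                                    capture_output=True, text=True)
            output = result.stdout
            _cache[user] = (now, output)
        return output

    if __name__ == "__main__":
        print(job_status())

In practice such a cache would sit behind whatever interface the grid jobs poll, so repeated status checks never reach the LSF master directly.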

Challenges, cont’d
- Prototype dCache:
  - Admin, PNFS, and pool nodes are in the Internet Free Zone; gsiftp and SRM doors have Internet access
  - Pool nodes run Solaris 9; got very useful information from the user forum
- Using RHEL instead of SL provides a significant benefit to SLAC; SLAC ATLAS physicists will provide validation for running ATLAS code on RHEL
- What hardware to buy next? How much memory? Which storage type (dCache, NFS, xrootd, ...)?

Future Plan
- 67% of funding on storage, 33% on CPU power; thanks to resource sharing, the Western Tier 2 can provide more CPU to ATLAS than this split implies
- A Western Tier 2 web page with information on what resources are available and how to access them, plus links to CERN and BNL for more general ATLAS information
- User support for the Western Tier 2 via the BNL RT system