Southwest Tier 2 Center Status Report
Mark Sosebee for the SWT2 Center
U.S. ATLAS Tier 2 Workshop - Harvard, August 17, 2006

Southwest Tier 2 (UTA)

Current Inventory: Dedicated Resources
UTA_SWT2
- 320 cores, 2GB/core, Xeon EM64T (3.2GHz)
- Several head nodes
- 20TB raw / 16TB usable in IBRIX/DDN
UTA_CPB (name is TBD)
- 200 cores, 2GB/core, Opteron
- Head nodes
- 75TB raw / 65TB usable in Dell storage (10x MD1000 + 8x Dell PE1950)
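Totals across the two dedicated clusters can be tallied with a short script (figures are taken from the slide above; the data structure itself is just illustrative):

```python
# Dedicated-resource inventory from the slide above (illustrative structure).
clusters = {
    "UTA_SWT2": {"cores": 320, "ram_gb_per_core": 2, "raw_tb": 20, "usable_tb": 16},
    "UTA_CPB":  {"cores": 200, "ram_gb_per_core": 2, "raw_tb": 75, "usable_tb": 65},
}

total_cores = sum(c["cores"] for c in clusters.values())
total_ram_gb = sum(c["cores"] * c["ram_gb_per_core"] for c in clusters.values())
total_usable_tb = sum(c["usable_tb"] for c in clusters.values())

print(total_cores, total_ram_gb, total_usable_tb)  # 520 1040 81
```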

Current Inventory: UTA_DPCC
- 64 Xeon 2.4GHz / Xeon 2.66GHz nodes, 1GB/core
- ATLAS usage: ~80 cores
- Nominally 45TB in 10 ASA RAID systems; realistically using 9TB

Near Term (FY07)
Addition to UTA_CPB:
- 400 Opteron cores, 2GB/core
- 210TB raw / ~180TB usable in Dell MD1000 + PE2970
FY 2008 plans are TBD
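Assuming the FY07 addition stacks on the existing UTA_CPB figures (and taking the "~180TB" estimate at face value), the post-upgrade totals work out as:

```python
# UTA_CPB before and after the FY07 addition (figures from the slides; sketch only).
existing_cores, added_cores = 200, 400
existing_usable_tb, added_usable_tb = 65, 180  # "~180TB" taken at face value

total_cpb_cores = existing_cores + added_cores
total_cpb_usable_tb = existing_usable_tb + added_usable_tb
print(total_cpb_cores, total_cpb_usable_tb)  # 600 245
```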

Storage Infrastructure
UTA_SWT2
- 16TB usable from 20TB raw
- Single filesystem via IBRIX
- Access via gsiftp through one gatekeeper
- Will explore OSG-supplied SRM v2 via BeStMan
UTA_CPB
- 65TB usable from 75TB raw
- Xrootd-based storage
- Access via gsiftp through one gatekeeper
- Expect to use SRM v2 via BeStMan
- Testing has shown ~300MB/s writes locally, ~100MB/s from the head node via xrdcp
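The two measured rates above imply rough staging times; a small helper (hypothetical, for back-of-envelope use only, decimal units, no protocol overhead) illustrates the difference:

```python
def transfer_minutes(size_gb: float, rate_mb_per_s: float) -> float:
    """Rough time to move size_gb at rate_mb_per_s (decimal units, no overhead)."""
    return size_gb * 1000 / rate_mb_per_s / 60

# Staging 100GB at the two rates observed on UTA_CPB:
local = transfer_minutes(100, 300)   # ~5.6 min at 300MB/s (local write)
remote = transfer_minutes(100, 100)  # ~16.7 min at 100MB/s (xrdcp from head node)
print(round(local, 1), round(remote, 1))
```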

Software Infrastructure
UTA_SWT2
- Platform Rocks 3.3 (RHEL3, 64-bit)
- Will move to Rocks SL4.5 (x86_64) when UTA_CPB is stable
- Will require an IBRIX update for the newer kernel version
UTA_CPB
- Rocks SL4.5 (x86_64)
- Basic installation done

Networking
UTA_SWT2 (off-campus location)
- 1Gb/s GigaMAN link to UTA_CPB
UTA_CPB
- 1Gb/s GigaMAN link to the North Texas Gigapop
UTA_DPCC
- Buried in the campus network (100Mb/s)
NTG to LEARN
- 1Gb/s from UTD to S. Akard
- LEARN peers with NewNet in Houston at 1Gb/s
iperf testing shows:
- ~500Mb/s from BNL
- Significantly lower to BNL (still researching)
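For scale, the ~500Mb/s iperf figure converts to bytes, and to a dataset-transfer estimate, as follows (decimal units, protocol overhead ignored; a sketch, not a measurement):

```python
def mbit_to_mbyte(rate_mbit_s: float) -> float:
    """Convert a link rate in Mb/s to MB/s (8 bits per byte)."""
    return rate_mbit_s / 8

rate = mbit_to_mbyte(500)                      # 62.5 MB/s
hours_for_1tb = 1_000_000 / rate / 3600        # moving 1TB (10^6 MB) at that rate
print(rate, round(hours_for_1tb, 1))           # 62.5 4.4
```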

Analysis Queues
- UTA_DPCC works with AutoPilot
- UTA_SWT2 will be problematic until Rocks 4.3
- UTA_CPB shouldn't be a problem

Analysis Dedication
UTA_SWT2, UTA_CPB
- Ideally 5% reserved for immediate access
- Grows to 20% of total capacity
UTA_DPCC
- Moves to a prototype Tier 3 resource
- 100% user analysis
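Against the 520 dedicated cores listed in the inventory slide, those fractions amount to (integer math to avoid rounding surprises; a sketch only):

```python
dedicated_cores = 320 + 200  # UTA_SWT2 + UTA_CPB, from the inventory slide

initial_reserved = dedicated_cores * 5 // 100    # 5% reserved for immediate access
eventual_reserved = dedicated_cores * 20 // 100  # growing to 20% of capacity
print(initial_reserved, eventual_reserved)  # 26 104
```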

Middleware
- UTA_SWT2 is partially converted to OSG 0.8; waiting on Gratia/APEL to be stable
- UTA_CPB will start at OSG 0.8
- UTA_DPCC will upgrade to OSG 0.8 after UTA_CPB is stable

Supporting Local Users
- No local-user support on the ATLAS-dedicated resources
- UTA_DPCC will become a Tier 3 resource for UTA