UTA Site Report
4th DOSAR Workshop, Iowa State University, Apr. 5 – 6, 2007
Jae Yu, Univ. of Texas, Arlington

Presentation transcript:

UTA Site Report
4th DOSAR Workshop, Iowa State University, Apr. 5 – 6, 2007
Jae Yu, Univ. of Texas, Arlington

Introduction
UTA's conversion to the ATLAS experiment is complete
– Kaushik De is co-leading Panda development
A partner of the ATLAS Southwest Tier 2 (SWT2) is at UTA
– Phase I implementation at UTACC completed and running
– Phase II implementation begun
MonALISA based OSG Panda monitoring implemented
– Allows OSG sites to show up on the LHC Dashboard
HEP group working with other disciplines in shared use of existing computing resources
– Interacting with the campus HPC community

UTA DPCC
UTA HEP-CSE + UTSW Medical joint project through an NSF MRI grant
Primarily participating in ATLAS MC production and reprocessing
Other disciplines also use this facility
– Biology, Geology, UTSW Medical, etc.
Hardware capacity
– Linux system: 197 CPUs, a mixture of P4 Xeon 2.4 – 2.6 GHz
  Total disk: 76.2 TB
  Total memory: 1 GB/CPU
  Network bandwidth: 68 Gb/sec
  Additional equipment will be purchased (about 2 more racks)
– 3 IBM PS157 Series shared-memory systems
  8 × 1.5 GHz Power5 processors each
  32 GB RAM
  6 × 140 GB internal disk drives
  1 × 2 Gb Fibre Channel adapter
  2 Gigabit Ethernet NICs

UTA – DPCC
HEP – CSE joint project (DØ + ATLAS, CSE research)
– 100 P4 Xeon 2.6 GHz CPUs = 260 GHz
  64 TB of IDE RAID + 4 TB internal, NFS file system
– 84 P4 Xeon 2.4 GHz CPUs = 202 GHz
  5 TB of FBC + 3.2 TB IDE internal, GFS file system
Total CPU: 462 GHz
Total disk: 76.2 TB
Total memory: 168 GB
Network bandwidth: 68 Gb/sec
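For readers tallying this slide's totals, they follow directly from the two clusters' counts; a minimal sanity-check sketch in Python, using only the figures quoted above:

    # Per-cluster counts from the slide (CPU clocks in GHz, disk in TB)
    clusters = [
        {"cpus": 100, "ghz": 2.6, "disk_tb": 64 + 4},   # IDE RAID + internal
        {"cpus": 84,  "ghz": 2.4, "disk_tb": 5 + 3.2},  # FBC + IDE internal
    ]
    total_ghz = sum(c["cpus"] * c["ghz"] for c in clusters)
    total_disk = sum(c["disk_tb"] for c in clusters)
    print(total_ghz)   # 461.6, rounded to 462 GHz on the slide
    print(total_disk)  # 76.2 TB, matching the quoted total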

SWT2
Joint effort between UTA, OU, LU and UNM
Phase I completed and is up and running
Purchase for Phase II completed
– Awaiting Dell to come and assemble the systems
2000 ft² in the new building, designed for 3000 computers

SWT2 Phase I Equipment
Existing equipment
– 160 node cluster installed at the Arlington Regional Data Center (ARDC), formerly the UTA Computing Center (UTACC)
Latest purchase
– 50 node cluster to be installed on campus in the new Chemistry and Physics Building
– Installation target: 5/07

Installed Equipment
160 node cluster (Dell SC1425)
– 320 cores (3.2 GHz Xeon EM64T)
– 2 GB RAM/core
– 160 GB SATA local disk drive
8 head nodes (Dell 2850)
– Dual 3.2 GHz Xeon EM64T
– 8 GB RAM
– 2 × 73 GB (RAID1) SCSI storage
16 TB storage system
– DataDirect Networks S2A3000 system with 80 × 250 GB SATA drives
– 6 I/O servers
– IBRIX Fusion file system
– Dedicated internal storage network (Gigabit Ethernet)

SWT2 Phase II Purchase
50 node cluster (Dell SC1435)
– 200 cores (2.4 GHz dual Opteron 2216)
– 8 GB RAM (2 GB/core)
– 80 GB SATA disk
2 head nodes
– Dual Opteron 2216
– 8 GB RAM
– 2 × 73 GB (RAID1) SAS drives
75 TB (raw) storage system
– 10 × MD1000 enclosures
– 150 × 500 GB SATA disk drives
– 8 I/O nodes
– dCache will be used for aggregating storage
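A quick cross-check of the two storage systems' raw capacities (a minimal sketch; drive counts and decimal drive sizes from the slides; the gap between Phase I's 20 TB raw and the 16 TB system quoted on the previous slide is presumably RAID and file-system overhead, which the slides don't spell out):

    # Raw capacity = number of drives x drive size (decimal TB)
    phase1_raw_tb = 80 * 0.250    # 20.0 TB raw, delivered as the 16 TB system
    phase2_raw_tb = 150 * 0.500   # 75.0 TB raw, matching "75 TB (raw)"
    print(phase1_raw_tb, phase2_raw_tb)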

Network Capacity
Had DS3 (44.7 Mbits/sec) until late 2004
Increased to OC3 (155 Mbits/s) in early 2005
OC12 (622 Mbits/s) as of early 2006
Expected to be connected to NLR (10 Gb/s) through LEARN anytime
– $9.8M ($7.3M for the optical fiber network) in State of Texas funds approved in Sept.
– LEARN network available to the DFW area
– Need to connect the last few miles
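To put the upgrade path in perspective, a minimal sketch (Python) of how long moving a 1 TB dataset would take at each stage, using the nominal link rates above and ignoring protocol overhead:

    # Time to move 1 TB (8e12 bits) at each nominal link rate
    rates_mbps = {"DS3": 44.7, "OC3": 155, "OC12": 622, "NLR/LEARN": 10000}
    dataset_bits = 8e12  # 1 TB
    for link, mbps in rates_mbps.items():
        hours = dataset_bits / (mbps * 1e6) / 3600
        print(f"{link:10s} {hours:7.1f} hours")
    # DS3 ~49.7 h, OC3 ~14.3 h, OC12 ~3.6 h, 10 Gb/s ~0.2 h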

LEARN Status
[figure slide: LEARN network status]


Software Development Activities
MonALISA based ATLAS distributed analysis monitoring
– A good, scalable system
– Server implemented at UTA
– Modification to the Panda pilot completed
– ATLAS-OSG sites are on the LHC Dashboard
Sudhamsh has integrated well into Panda related projects
– Working on distributed data handling
Started looking into Linux Applications on Windows (the LAW project)
– To bring in the enormous amount of available resources

ATLAS DA Dashboard
LCG sites report to one MonALISA service and one repository
– CERN colleagues implemented an ATLAS DA dashboard on top of it
OSG sites are different
– Extremely democratic: each site has its own MonALISA server and repository
– An ApMon was developed for each job to report to the MonALISA server at UTA, not yet exploiting OSG's democratic configuration
– The MonALISA server responds when prompted by the Dashboard
Code implementation completed, and ATLAS OSG sites are on the LHC Dashboard
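The per-job reporting described above is the kind of thing MonALISA's ApMon client library provides; a minimal sketch in Python, assuming the standard apmon module is installed, with the destination host and all parameter names invented here for illustration (they are not the ones the Panda pilot actually used):

    import apmon  # MonALISA's ApMon client module (assumed available)

    # Destination MonALISA service; hostname and port are illustrative only
    apm = apmon.ApMon(['monalisa.example.uta.edu:8884'])

    # Publish a few job-level quantities under a cluster/node name;
    # these parameter names are hypothetical
    apm.sendParameters('PandaJobs', 'job-12345', {
        'status': 'running',
        'cpu_seconds': 742,
        'events_processed': 5000,
    })

    apm.free()  # stop ApMon's background threads before exiting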

CSE Student Exchange Program
Joint effort between HEP and CSE
– David Levine is the primary contact at CSE
A total of 10 CSE MS students have worked in the SAM-Grid team
– Five generations of students
– This program ended as of Aug. 31, 2006
New program with BNL at a trial stage
– First set of two students started working in summer 2006
– Participating in ATLAS Panda projects; one additional student on the Panda-on-OSG project
– The first generation is completing its tenure
– Participating in pilot factory and Panda monitoring development

Conclusions
UTA's transition from DØ to ATLAS is complete
ATLAS-OSG sites are on the LHC Dashboard
– Our proposal for MonALISA based DA monitoring was adopted as the initial direction for the ATLAS monitoring system
Participating in distributed data handling software development and improvement
Actively participating in ATLAS CSC analyses
Network capacity of 10 Gb/s to come any day
Working closely with HiPCAT on statewide grid activities
Looking into the LAW project