5th DOSAR Workshop Louisiana Tech University Sept. 27 – 28, 2007


UTA Site Report
Jae Yu, Univ. of Texas, Arlington
5th DOSAR Workshop, Louisiana Tech University, Sept. 27 – 28, 2007

Introduction
- UTA is a partner of ATLAS SWT2
  - Actively participating in ATLAS production
  - Kaushik De is co-leading Panda development
- Phase I implementation at UTACC completed and running
- Phase II hardware installation completed; software installation in progress
- MonALISA-based OSG Panda monitoring implemented → allows OSG sites to show up on the LHC Dashboard
- Working on DDM monitoring
- HEP group working with other disciplines on shared use of existing computing resources
  - Interacting with the campus HPC community
  - Working with HiPCAT, the Texas grid community

UTA DPCC – The 2003 Solution
- UTA HEP-CSE + UTSW Medical joint project through NSF MRI
- Primary equipment for DØ reconstruction and MC production up to 2005
- Now primarily participating in ATLAS MC production and reprocessing as part of SWT2 resources
- Other disciplines (Biology, Geology, UTSW Medical, etc.) also use this facility, but at a minimal level
- Hardware capacity
  - PC-based Linux system assisted by some 70TB of IDE disk storage
  - 3 IBM PS157 Series shared-memory systems

UTA – DPCC
- 84 P4 Xeon 2.4GHz CPUs = 202 GHz
  - 5TB of FBC + 3.2TB IDE internal
  - GFS file system
- 100 P4 Xeon 2.6GHz CPUs = 260 GHz
  - 64TB of IDE RAID + 4TB internal
  - NFS file system
- Total CPU: 462 GHz
- Total disk: 76.2TB
- Total memory: 168GB
- Network bandwidth: 68Gb/sec
- HEP – CSE joint project: DØ + ATLAS, CSE research
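
The quoted totals follow directly from the two sub-cluster figures above; a quick sanity check of the arithmetic (values copied from the bullets):

```python
# Sanity check of the DPCC aggregates quoted above (numbers taken from the slide).
cpu_ghz = 84 * 2.4 + 100 * 2.6   # 201.6 + 260.0 = 461.6 GHz, rounded to 462 GHz on the slide
disk_tb = 5 + 3.2 + 64 + 4       # 76.2 TB total disk
print(f"Total CPU: {cpu_ghz:.1f} GHz, total disk: {disk_tb:.1f} TB")
```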

SWT2
- Joint effort between UTA, OU, LU, and UNM
- 2,000 ft² in the new building
- Designed for 3,000 1U nodes; could go up to 24k cores
- 1MW total power capacity
- Cooling with 5 Liebert units
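
For a sense of scale, the design figures above imply a rough per-node budget (a back-of-the-envelope check only, assuming the full 3,000-node, 1 MW build-out):

```python
# Rough per-node budget implied by the facility design figures above (illustrative only).
nodes, power_w, max_cores = 3000, 1_000_000, 24_000
print(f"~{power_w / nodes:.0f} W per 1U node, ~{max_cores / nodes:.0f} cores per node at full build-out")
# -> ~333 W per 1U node, ~8 cores per node
```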

Installed SWT2 Phase I Equipment
- 160-node cluster (Dell SC1425)
  - 320 cores (3.2GHz Xeon EM64T)
  - 2GB RAM/core
  - 160GB SATA local disk drive per node
- 8 head nodes (Dell 2850)
  - Dual 3.2 GHz Xeon EM64T
  - 8GB RAM
  - 2x 73GB (RAID1) SCSI storage
- 16TB storage system
  - DataDirect Networks S2A3000 system
  - 80x250GB SATA drives
  - 6 I/O servers
  - IBRIX Fusion file system
  - Dedicated internal storage network (Gigabit Ethernet)
- Has been operating and conducting Panda production for over a year

SWT2 Phase II Equipment
- 50-node cluster (Dell SC1435)
  - 200 cores (2.4GHz dual Opteron 2216)
  - 8GB RAM (2GB/core)
  - 80 GB SATA disk per node
- 2 head nodes
  - Dual Opteron 2216
  - 8 GB RAM
  - 2x73GB (RAID1) SAS drives
- 75 TB (raw) storage system (see the raw-capacity check below)
  - 10x MD1000 enclosures
  - 150x500GB SATA disk drives
  - 8 I/O nodes
  - dCache will be used for aggregating storage
- 10Gb internal network capacity
- Hardware installation completed and software installation in progress
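
The raw storage figures for both phases follow from the drive counts quoted above; a quick check (the 16TB Phase I figure is the usable capacity, so the difference from raw presumably reflects RAID and file-system overhead, which the slides do not specify):

```python
# Raw storage implied by the drive counts on the Phase I and Phase II slides.
phase1_raw_tb = 80 * 0.250    # 80 x 250GB SATA drives  -> 20.0 TB raw (16TB usable quoted)
phase2_raw_tb = 150 * 0.500   # 150 x 500GB SATA drives -> 75.0 TB raw
print(f"Phase I: {phase1_raw_tb:.1f} TB raw, Phase II: {phase2_raw_tb:.1f} TB raw")
```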

ATLAS SWT2 (2007)
SWT2-PH1@UTACC
- 320 Xeon 3.2 GHz cores = 1,024 GHz
- 2GB RAM/core = 640GB
- 160GB internal disk/unit → 25.6TB
- 8 dual-core server nodes
- 16TB of storage assisted by 6 I/O servers
- Dedicated Gbit internal connections
SWT2-PH2@UTACPB
- 200 Opteron 2.4 GHz cores = 480 GHz
- 2GB RAM/core = 400GB
- 80GB SATA/unit → 4TB
- 8 dual-core server nodes
- 75TB of storage in 10 Dell MD1000 RAID enclosures assisted by 8 I/O servers
- Dedicated Gbit internal connections

Network Capacity History at UTA
- Had DS3 (44.7 Mbits/sec) till late 2004
  - Choked the heck out of the network for about a month downloading DØ data for re-reconstruction
  - Met with the VP of Research at UTA and emphasized the importance of the network backbone for attracting external funds
- Increased to OC3 (155 Mbits/s) in early 2005
- OC12 as of early 2006
- Connected to NLR (10Gb/s) through LEARN (http://www.tx-learn.org/) via a 1Gb/s connection to NTGP
  - $9.8M ($7.3M for the optical fiber network) in State of Texas funds approved in Sept. 2004
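
To put those link speeds in perspective, a rough estimate of how much data each link could move in a month (an illustrative calculation, assuming a fully saturated link; the OC12 rate of 622 Mbit/s is the standard value, not quoted on the slide):

```python
# Rough monthly transfer capacity at each link speed mentioned above (saturated link assumed).
seconds_per_month = 30 * 24 * 3600
for name, mbit_per_s in [("DS3", 44.7), ("OC3", 155), ("OC12", 622), ("1 Gb/s to NLR", 1000)]:
    tb_per_month = mbit_per_s / 8 / 1e6 * seconds_per_month   # Mbit/s -> MB/s -> TB per month
    print(f"{name}: ~{tb_per_month:.0f} TB/month")
# DS3 works out to roughly 14 TB/month, which is why the re-reconstruction download took a month.
```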

LEARN Status (map of the LEARN optical network; figure-only slide)

NLR – National LambdaRail (map showing ONENET, LONI, and LEARN; 10Gb/sec connections)

Software Development Activities
- MonALISA-based ATLAS distributed analysis monitoring
  - A good, scalable system
  - Software development and implementation completed
  - ATLAS-OSG sites are on the LHC Dashboard
  - New server purchased for OSG at UTA → activation to follow shortly
- Working on the DDM monitoring project
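
For reference, MonALISA services are typically fed by lightweight ApMon senders; a minimal sketch of how a site-level job summary could be published, assuming the ApMon Python client distributed with MonALISA (the destination host, cluster, node, and parameter names below are illustrative, not the actual UTA configuration):

```python
# Minimal sketch of publishing site metrics to a MonALISA service via ApMon.
# The destination, cluster/node names, and parameters are illustrative only.
from apmon import ApMon

apm = ApMon(['monalisa.example.edu:8884'])            # hypothetical MonALISA UDP destination
apm.sendParameters('UTA_SWT2_Panda',                  # cluster name (illustrative)
                   'gk01.example.edu',                # node name (illustrative)
                   {'jobs_running': 412, 'jobs_queued': 57, 'cpu_ghz_total': 462})
apm.free()                                            # shut down ApMon background threads
```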

Centralized LHC Distributed Computing Monitor (dashboard screenshot; figure-only slide)

CSE Student Exchange Program
- Joint effort between HEP and CSE
  - David Levine is the primary contact at CSE
- A total of 10 CSE MS students have worked in the SAM-Grid team
  - Five generations of students
  - Many of them play leading roles in the grid community
    - Abishek Rana at UCSD
    - Parag Mashilka at FNAL
    - Sudhamsh Reddy working for UTA at BNL
- New program with BNL implemented
  - First student has completed the tenure and is on job training
  - Second set of two Ph.D. students at BNL
    - Participating in the ATLAS Panda project
    - One student working on a pilot factory using Condor glide-ins (see the sketch below)
    - Working on developing middleware
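
A pilot factory in this style typically keeps a target number of glide-in pilots in the local Condor queue while Panda has work for the site; a minimal sketch of that loop, assuming a prepared glide-in submit file (the file name, target count, and the unfiltered queue query are all illustrative, not the actual factory code):

```python
# Minimal sketch of a pilot-factory loop: keep a target number of Condor glide-in
# pilots queued. All names and values here are illustrative.
import subprocess
import time

TARGET_PILOTS = 20            # desired number of pilots idle or running (illustrative)
SUBMIT_FILE = "pilot.submit"  # prepared glide-in submit description file (illustrative)

def jobs_in_queue() -> int:
    """Count jobs currently in the local Condor queue (one ClusterId printed per job).
    A real factory would also filter on the pilot executable or accounting group."""
    out = subprocess.run(["condor_q", "-format", "%d\n", "ClusterId"],
                         capture_output=True, text=True).stdout
    return len(out.splitlines())

while True:
    missing = TARGET_PILOTS - jobs_in_queue()
    for _ in range(max(missing, 0)):
        subprocess.run(["condor_submit", SUBMIT_FILE], check=True)
    time.sleep(300)           # re-check the queue every five minutes
```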

Conclusions
- MonALISA-based Panda monitoring activated
  - New server needs to be brought up
- Working on DDM monitoring; will be involved in further DDM work
- Leading EG2 (photon) CSC note exercise
- Connected to 10Gb/s NLR via a 1Gb/s connection to UTD
- Working closely with HiPCAT on state-wide grid activities
- Need to go back to the LAW project, but awaiting successes at OU and ISU