UTA Site Report
Jae Yu, Univ. of Texas, Arlington
7th DOSAR Workshop, Louisiana State University
Apr. 2 – 3, 2009
Introduction
UTA is a partner of ATLAS SWT2
– Actively participating in ATLAS production
  – Kaushik De is co-leading Panda development
  – Phase I implementation at UTACC is completed and running
  – Phase II implementation at the Physics and Chemistry Research Building is ongoing
– DDM monitoring project completed
– JY is co-leading ATLAS Operations Support
HEP group is working with other disciplines on shared use of existing computing resources
– Interacting with the campus HPC community
– Working with HiPCAT, the Texas grid community
Co-Linux Condor cluster setup has essentially stopped but will be picked back up soon (see the submit-file sketch below)
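As context for the Condor item above, this is a minimal HTCondor submit description of the kind such a cluster would accept; the executable, arguments, and file names are hypothetical placeholders, not the actual UTA configuration.

  # Minimal HTCondor submit description (a sketch; hypothetical
  # executable and file names, not the actual UTA setup)
  universe   = vanilla
  executable = simulate_events
  arguments  = --nevents 1000
  output     = job.$(Cluster).$(Process).out
  error      = job.$(Cluster).$(Process).err
  log        = job.log
  should_transfer_files   = YES
  when_to_transfer_output = ON_EXIT
  queue 10

Submitting this with condor_submit queues ten independent jobs, which Condor matches to idle nodes in the pool as they become available.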
UTA DPCC – The 2003 Solution
UTA HEP-CSE + UTSW Medical joint project through the NSF MRI program
– Primary equipment for DØ re-reconstruction and MC production up to 2005
– Now primarily participating in ATLAS MC production and reprocessing as part of the SWT2 resources
Other disciplines also use this facility, but at a minimal level
– Biology, Geology, UTSW Medical, etc.
– Simulation for detector development
Hardware capacity
– PC-based Linux system assisted by some 70 TB of IDE disk storage
– 3 IBM PS157 Series shared-memory systems
UTA – DPCC
HEP – CSE joint project: DØ + ATLAS and CSE research
– 100 P4 Xeon 2.6 GHz CPUs = 260 GHz, with 64 TB of IDE RAID + 4 TB internal, on an NFS file system
– 84 P4 Xeon 2.4 GHz CPUs = 202 GHz, with 5 TB of FBC + 3.2 TB internal IDE, on a GFS file system
– Total CPU: 462 GHz
– Total disk: 76.2 TB
– Total memory: 168 GB
– Network bandwidth: 68 Gb/sec
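The totals follow directly from the two sub-clusters: 100 × 2.6 GHz = 260 GHz and 84 × 2.4 GHz ≈ 202 GHz, giving 260 + 202 = 462 GHz of CPU, while 64 + 4 + 5 + 3.2 = 76.2 TB of disk.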
SWT2 @ UTA
The Southwest Tier 2 Center is a collaboration between the University of Texas at Arlington (UTA) and the University of Oklahoma (OU)
Personnel:
– UTA: Kaushik De, Patrick McGuigan, Victor Reece, Mark Sosebee
– OU: Karthik Arunachalam, Horst Severini, Pat Skubic, Joel Snow (LU)
Configuration of Phase II at CPB
– 183 compute nodes (732 cores): a mix of Opteron 2216/2220 CPUs, dual core, 2 GB RAM per core
– Front-end nodes, monitoring hosts, etc.: a mix of Opteron and Xeon systems that provide cluster gateways (Globus, storage, etc.) and administration
– 225 TB of storage (usable), based on Dell MD1000 systems
– xrootd is used to provide a unified file namespace (see the configuration sketch below)
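To illustrate how xrootd typically presents a set of storage servers as one namespace, here is a minimal redirector/data-server configuration sketch; the host names, port, and export path are hypothetical, not the actual SWT2 settings.

  # Minimal xrootd/cmsd configuration sketch (hypothetical hosts and
  # paths, not the SWT2 production settings). One shared file: the
  # if/else block assigns the manager role to the redirector host and
  # the server role to every data server.
  all.export /atlas
  all.manager redirector.example.edu:3121
  if redirector.example.edu
    all.role manager
  else
    all.role server
  fi
  xrd.port 1094

Clients then open files as root://redirector.example.edu//atlas/..., and the redirector forwards each request to whichever data server holds the file, so the storage behind the MD1000 arrays appears as a single tree.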
Configuration of Phase I at ARDC
– 160 compute nodes: dual Xeon EM64T, 3.2 GHz, 4 GB RAM, 160 GB disk
– 8 front-end nodes: dual Xeon EM64T, 8 GB RAM, 73 GB SCSI RAID 1
– 16 TB SAN storage (IBRIX): 80 × 250 GB SATA disks, with 6 I/O servers and 1 management server
– 16 TB of potential storage in the compute nodes
Upcoming Expansion at UTA T2
The most pressing resource issue is storage capacity
– The number of CPU cores will of course grow over time, but ensuring adequate storage is critical
A purchase process is underway
– Approximately 600 TB (raw) in the same Dell MD1000 disk arrays, plus storage servers using PERC5 controller cards
– A cluster dedicated to user analysis: on the order of 100 CPU cores and 100 TB of disk
We hope this equipment will arrive by late April or early May
CSE Student Exchange Program
A joint effort between HEP and CSE
A total of 10 CSE MS students have each worked on the SAM-Grid team
– Five generations of students
– Many of them now playing leading roles in the grid community, e.g. Abishek Rana at UCSD and Parag Mashilka at FNAL
The program with the BNL Panda project is now mature
– Three students have completed their tenure: two obtained MS degrees and one is working on a Ph.D.
– One Ph.D. student at UTA is working with the BNL team, participating in the ATLAS Panda monitoring project
Conclusions
Actively engaged in preparing for collisions at the LHC
SWT2 is taking shape as a well-established facility
– DPCC is now being called "old" but is still in use
Involved in ATLAS Computing Operations Support; these activities will pick up speed
Working closely with HiPCAT on state-wide grid activities
Co-Linux Condor cluster setup is at a halt at the moment but will need to pick up speed soon
The CSE student exchange program is still ongoing, now with Ph.D. students