“UCF” Computing Capabilities at UNM HPC
Timothy L. Thomas, UNM Dept. of Physics and Astronomy
Santa Fe, 6/18/03
I have a 200K SU (150K LL CPU hour) grant from the NRAC of the NSF/NCSA, with which UNM HPC (“AHPCC”) is affiliated.
Peripheral Data vs. Simulation
Simulation: muons from central HIJING (QM'02 Project07)
Data: centrality by Perp > 60
(Stolen from Andrew…)
Simulated Decay Muons
QM'02 Project07 PISA files (central HIJING).
Closest cuts possible from the PISA file to match data (parent p_T > 1 GeV/c, theta of original parent 155-161 degrees).
Investigating the possibility of keeping only muon and parent hits for reconstruction.
17100 total events, distributed over Z = ±10, ±20, ±38 cm; more events are available, but they only matter for the smallest error bar. Z_eff ~ 75 cm.
Cut string (some comparison operators were lost in extraction; lost spans marked [?]):
"(IDPART==5 || IDPART==6) && IDPARENT > 6 && IDPARENT [?] 155 && PTHE_PRI [?] 2002 && PTOT_PRI*sin(PTHE_PRI*acos(0)/90.) > 1." (Not in fit)
(Stolen from Andrew…)
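In the cut string, acos(0) equals π/2, so the factor acos(0)/90 converts PTHE_PRI from degrees to radians; PTOT_PRI*sin(…) > 1 is therefore the parent p_T > 1 GeV/c requirement. A minimal C++ sketch of that piece of the selection (branch names are taken from the cut string above; the event values are invented for illustration):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Invented example values; in practice these come from the PISA ntuple.
        double PTOT_PRI = 3.0;   // total momentum of the primary [GeV/c]
        double PTHE_PRI = 158.0; // polar angle of the primary [degrees]

        // acos(0) = pi/2, so acos(0)/90 = pi/180 (degrees -> radians).
        double theta = PTHE_PRI * std::acos(0.0) / 90.0;

        // Transverse momentum; the slide cuts at p_T > 1 GeV/c on the parent.
        double pT = PTOT_PRI * std::sin(theta);

        bool pass = (pT > 1.0) && (PTHE_PRI > 155.0) && (PTHE_PRI < 161.0);
        std::printf("pT = %.3f GeV/c -> %s\n", pT, pass ? "pass" : "fail");
        return 0;
    }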
Now at UNM HPC:
  PBS
  Globus 2.2.x
  Condor-G / Condor
  (GDMP)
  …all supported by HPC staff.
In progress: a new 1.2 TB RAID 5 disk server, to host:
  AFS cache
  PHENIX software
  ARGO file catalog (PostgreSQL)
  Local Objectivity mirror
  Globus 2.2.x (GridFTP and more…)
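With PBS, Globus 2.2.x, and Condor-G listed side by side, the usual wiring is a Condor-G submit description that hands jobs to the Globus gatekeeper, which in turn submits them to PBS. A hypothetical sketch in the classic Condor-G "globus" universe syntax (the gatekeeper host and file names are illustrative assumptions, not taken from the slides):

    # Hypothetical Condor-G submit file (Globus 2.x era syntax).
    universe        = globus
    # Assumed gatekeeper + PBS jobmanager on the front end; illustrative only.
    globusscheduler = lldimu.hpc.unm.edu/jobmanager-pbs
    executable      = run_analysis.sh
    output          = job.out
    error           = job.err
    log             = job.log
    queue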
Pre-QM2002 experience with globus-url-copy:
Easily saturated UNM's bandwidth limits (as they were at that time).
PKI infrastructure and sophisticated error handling are a real bonus over bbftp.
(One bug, known at the time, is being / has been addressed.)
[Plot at left: transfer rates in KB/sec; 10 parallel streams used.]
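For reference, the transfer shape behind the plot: globus-url-copy running multiple parallel TCP streams via its -p option. A hypothetical invocation (endpoint host names and paths are placeholders, not from the slide):

    # 10 parallel streams, as in the plot; endpoints are invented placeholders.
    globus-url-copy -p 10 \
        gsiftp://source.site.example/data/run123.PRDF \
        gsiftp://lldimu.hpc.unm.edu/scratch/run123.PRDF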
[Image slide: LLDIMU.HPC.UNM.EDU, the LL front-end machine.]
Resources

Filtered events can be analyzed, but not ALL PRDF events; many triggers overlap.
Assume 90 KByte/event and 0.1 GByte/hour/CPU.

Trigger                    Lumi[nb^-1]  #Event[M]  Size[GByte]  CPU[hour]  100CPU[day]  Signal
ERT_electron                   193        13.0        1170        11700        4.9      ✓
MUIDN_1D&BBCLL1                238        34.0        3060        30600       12.8      ✓✓✓
MUIDN_1D&MUIDS_1D&BBCLL1        59         0.2          18          180        0.1      ✓
MUIDN_1D1S&BBCLL1              254         4.8         432         4320        1.8      ✓✓✓
MUIDN_1D1S&NTCN                230        18.0        1620        16200        6.8      ✓
MUIDS_1D&BBCLL1                274        10.7         963         9630        4.0      ✓✓✓
MUIDS_1D1S&BBCLL1              293         1.3         117         1170        0.5      ✓
MUIDS_1D1S&NTCS                278         5.0         450         4500        1.9      ✓
ALL PRDF                       350      6600.0      33,000      330,000      137.5

(The original Signal column had three subcolumns, μ-μ / μ / e-μ; ✓✓✓ means all three apply, but for rows with a single ✓ the specific subcolumn was lost in extraction.)
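The Size, CPU, and 100-CPU-day columns follow mechanically from the two stated assumptions (90 KByte/event, 0.1 GByte/hour/CPU). A quick C++ check against the ERT_electron row (numbers from the table above):

    #include <cstdio>

    int main() {
        double mevents     = 13.0;                       // ERT_electron: 13.0 M events
        double size_gb     = mevents * 1e6 * 90e3 / 1e9; // 90 KB/event  -> 1170 GB
        double cpu_hours   = size_gb / 0.1;              // 0.1 GB/h/CPU -> 11700 h
        double days_100cpu = cpu_hours / 100.0 / 24.0;   // ~4.9 days on 100 CPUs
        std::printf("%.0f GB, %.0f CPU-hours, %.1f days on 100 CPUs\n",
                    size_gb, cpu_hours, days_100cpu);
        return 0;
    }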
Rough calculation of real-data processing (I/O-intensive) capabilities:
10 M events, PRDF-to-{DST+x}, both mut & mutoo; assume 3 sec/event (×1.3 for LL), 200 KB/event.
One pass: 7 days on 50 CPUs (25 boxes), using 56% of LL local network capacity.
My 200K "SU" (~150K LL CPU-hour) allocation allows for 18 of these passes (4.2 months).
A 3 MB/sec Internet2 connection = 1.6 TB / 12 nights (MUIDN_1D1S&NTCN).
(Presently) LL is most effective for CPU-intensive tasks: simulations can easily fill the 512 CPUs (e.g., QM'02 Project 07).
Caveats: "LLDIMU" is a front-end machine; the LL worker-node environment is different from CAS/RCS nodes (→ P. Power…).
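A back-of-envelope check of the pass arithmetic (3 sec/event, 50 CPUs, 150K CPU-hour allocation; the ×1.3 LL factor is omitted here, which reproduces the slide's 7-day figure):

    #include <cstdio>

    int main() {
        double events        = 10e6;                        // 10 M events per pass
        double cpuh_per_pass = events * 3.0 / 3600.0;       // ~8333 CPU-hours
        double days_per_pass = cpuh_per_pass / 50.0 / 24.0; // ~7 days on 50 CPUs
        double passes        = 150e3 / cpuh_per_pass;       // ~18 passes
        std::printf("%.0f CPU-h/pass, %.1f days/pass, %.0f passes (~%.1f months)\n",
                    cpuh_per_pass, days_per_pass, passes,
                    passes * days_per_pass / 30.0);
        return 0;
    }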