“UCF” Computing Capabilities at UNM HPC
Timothy L. Thomas, UNM Dept. of Physics and Astronomy
Santa Fe, 6/18/03

[Slides 4, 5, 6, 7, and 9: image-only slides; no transcript text.]

I have a 200K SU (150K LL CPU hour) grant from the NRAC of the NSF/NCSA, with which UNM HPC (“AHPCC”) is affiliated.

Peripheral Data vs. Simulation
Simulation: muons from central HIJING (QM02 Project07)
Data: centrality by Perp > 60
(Stolen from Andrew…)

Simulated Decay Muons
QM’02 Project07 PISA files (central HIJING).
Closest cuts possible from the PISA file to match data (P_T of parent > 1 GeV/c; Theta, P orig, Parent).
Investigating the possibility of keeping only muon and parent hits for reconstruction.
Total events distributed over Z = ±10, ±20, ±38.
More events available, but only a factor for the smallest error bar.
Z_eff ~ 75 cm.
Cut: "(IDPART==5 || IDPART==6) && IDPARENT >6 &&IDPARENT 155 && PTHE_PRI 2002 && PTOT_PRI*sin(PTHE_PRI*acos(0)/90.) > 1."
Not in fit.
(Stolen from Andrew…)

Now at UNM HPC:
- PBS
- Globus 2.2.x
- Condor-G / Condor
- (GDMP)
…all supported by HPC staff.
In progress: a new 1.2 TB RAID 5 disk server, to host:
- AFS cache → PHENIX software
- ARGO file catalog (PostgreSQL)
- Local Objectivity mirror
- Globus 2.2.x (GridFTP and more…)
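As an illustration of how these pieces fit together, a minimal Condor-G submit description for routing a job through the Globus gatekeeper into PBS might look like the sketch below. The gatekeeper contact (the LLDIMU front end with a jobmanager-pbs service) and all file names are assumptions for illustration, not details taken from the slides.

    # Hedged sketch of a Condor-G submit file (circa Condor 6.x / Globus 2.2 "globus" universe).
    # The gatekeeper host and jobmanager name below are assumed, not confirmed by the slides.
    universe        = globus
    globusscheduler = lldimu.hpc.unm.edu/jobmanager-pbs
    # Hypothetical reconstruction wrapper and I/O files:
    executable      = run_mut_reco.sh
    arguments       = run0001.prdf
    output          = run0001.out
    error           = run0001.err
    log             = run0001.log
    queue

Submission and monitoring would then go through the usual condor_submit and condor_q commands on the machine running Condor-G.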

Pre-QM2002 experience with globus-url-copy…
- Easily saturated UNM bandwidth limitations (as they were at that time).
- PKI infrastructure and sophisticated error handling are a real bonus over bbftp.
- (One bug, known at the time, is being / has been addressed.)
(Figure at left: transfer rate in KB/sec; 10 parallel streams used.)
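For concreteness, a transfer of the kind plotted there (10 parallel streams) might be launched along these lines; the host name, paths, and TCP buffer size are placeholders, not values from the slides.

    # Hedged sketch: GridFTP transfer with 10 parallel streams and an enlarged TCP buffer.
    globus-url-copy -vb -p 10 -tcp-bs 1048576 \
        gsiftp://source.host.example/phenix/run02/example.prdf \
        file:///scratch/phenix/example.prdf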

Slide 15: LLDIMU.HPC.UNM.EDU (the LL front-end machine; image-only slide otherwise).
[Slides 16 through 19: image-only slides; no transcript text.]

Resources
Filtered events can be analyzed, but not ALL PRDF events; many triggers overlap.
Assume 90 kByte/event and 0.1 GByte/hour/CPU.
Table columns: Signal, Trigger, Lumi [nb^-1], #Events [M], Size [GByte], CPU [hour], 100-CPU [days].
Signals: mu-mu, mu-e, e-mu.
Triggers: ERT_electron, MUIDN_1D&BBCLL1, MUIDN_1D&MUIDS_1D&BBCLL1, MUIDN_1D1S&BBCLL1, MUIDN_1D1S&NTCN, MUIDS_1D&BBCLL1, MUIDS_1D1S&BBCLL1, MUIDS_1D1S&NTCS, and ALL PRDF.
(The per-trigger numbers from the original table did not survive the transcript.)
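The Size and CPU columns of that table follow directly from the two stated assumptions (90 kByte/event, 0.1 GByte/hour/CPU). A small sketch of the conversion, using a made-up 10 M-event sample since the per-trigger counts are not reproduced above:

    # Hedged sketch of the table's conversion rule; the event count below is invented for illustration.
    KB_PER_EVENT = 90.0        # 90 kByte per filtered event (stated assumption)
    GB_PER_CPU_HOUR = 0.1      # processing throughput per CPU (stated assumption)

    def trigger_cost(n_events_millions, n_cpus=100):
        """Return (size in GByte, CPU-hours, days on n_cpus) for one trigger sample."""
        size_gb = n_events_millions * 1.0e6 * KB_PER_EVENT / 1.0e6   # kByte -> GByte
        cpu_hours = size_gb / GB_PER_CPU_HOUR
        days = cpu_hours / n_cpus / 24.0
        return size_gb, cpu_hours, days

    print(trigger_cost(10.0))   # 10 M events -> (900 GByte, 9000 CPU-hours, ~3.8 days on 100 CPUs)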

Rough calculation of real-data processing (I/O-intensive) capabilities:
10 M events, PRDF-to-{DST+x}, both mut and mutoo; assume 3 sec/event (x1.3 for LL), 200 → 200 KB/event.
- One pass: 7 days on 50 CPUs (25 boxes), using 56% of LL local network capacity.
- My 200K “SU” (~150K LL CPU-hour) allocation allows for 18 of these passes (4.2 months).
- A 3 MB/sec Internet2 connection = 1.6 TB / 12 nights (MUIDN_1D1S&NTCN).
(Presently) LL is most effective for CPU-intensive tasks: simulations can easily fill the 512 CPUs; e.g., QM02 Project 07.
Caveats: “LLDIMU” is a front-end machine; the LL worker-node environment is different from a CAS/RCS node (→ P. Power…).
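The arithmetic behind these bullets can be checked in a few lines; the ~12-hour nightly transfer window used in the last step is my assumption, not something stated on the slide.

    # Hedged back-of-envelope check of the numbers quoted above.
    events       = 10.0e6      # PRDF events per pass
    sec_per_evt  = 3.0         # reconstruction time per event
    ll_factor    = 1.3         # quoted LL slowdown factor (not applied in the 7-day figure)
    cpus         = 50          # 25 boxes, 2 CPUs each

    pass_days = events * sec_per_evt / cpus / 86400.0           # ~6.9 days ("7 days"; ~9 with the 1.3 factor)

    allocation_cpu_hours = 150.0e3                               # ~150K LL CPU hours
    n_passes = allocation_cpu_hours / (cpus * pass_days * 24.0)  # ~18 passes
    months   = n_passes * pass_days / 30.0                       # ~4.2 months

    # 3 MB/s Internet2 link, assuming ~12 hours of transfer per night:
    tb_per_12_nights = 3.0e6 * 3600.0 * 12.0 * 12.0 / 1.0e12     # ~1.6 TB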