Preparations for Reconstruction of Run7 Min Bias PRDFs at Vanderbilt's ACCRE Farm
(more substantial update set for next week)
Charles Maguire et al.
March 14, 2007 Analysis Meeting

Proposed Effort and Preparations

- The ACCRE farm will reconstruct ~10% of the Run7 Min Bias filtered PRDFs
- Reconstruction (PRDFs to nanoDSTs) will be done in near real time
- Have confirmed that 50-75 CPUs and 50-75 TBytes are available
- The nanoDSTs will be transported back to RCF in real time (to where?)
- Preparations so far:
  - Copied Run4 Au+Au PRDFs to Vanderbilt (same run as for CCF)
  - Reconstructed with the pro.74 library using the Fun4Everyone_new.C macro (a sketch of this per-file step follows the list)
  - Re-using machinery from the Run6 Level2 reconstruction project:
    - GridFTP server on ACCRE
    - File-handling scripts written by David Silvermyr and myself
    - Database-updating software written by Hugo Valle (still being modified)
  - Began new network transfer tests BNL <--> ACCRE (results on the next slide)
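For orientation, here is a minimal sketch of how one PRDF could be pushed through the macro in ROOT batch mode. Only the macro name appears in the slides; its argument list (input file, event count) and this driver script are illustrative assumptions, not the actual production scripts.

```python
#!/usr/bin/env python
# Minimal sketch (not the production script): run the Fun4Everyone_new.C
# macro over a single PRDF file with ROOT in batch mode. The macro's
# argument list (input file, event count) is assumed for illustration.
import subprocess
import sys

def reconstruct(prdf_path, nevents=0):
    """Reconstruct one PRDF file; nevents=0 means process all events."""
    macro_call = 'Fun4Everyone_new.C("%s", %d)' % (prdf_path, nevents)
    # root -b -q : batch mode, quit ROOT when the macro finishes
    return subprocess.call(['root', '-b', '-q', macro_call])

if __name__ == '__main__':
    sys.exit(reconstruct(sys.argv[1]))
```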

GridFTP Network Transfer Tests

- ACCRE to rftpexp01/rftpexp02 (February 14):
  - Source on a local ACCRE node writing to the /data/phenix areas
  - Two GridFTP jobs running simultaneously on the same 8 GByte file (sketched after this list)
  - Saw 43.4 MBytes/second aggregate (21.91 + 21.53 MBytes/second)
- New issues (2007):
  - ACCRE engineers have security concerns about GridFTP
  - They want to meet with BNL people to address those concerns
  - What protections are other sites that use GridFTP taking?
  - Should only a limited number of boxes at BNL be used?
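A sketch of the February 14 test pattern, assuming the standard globus-url-copy client: two simultaneous jobs moving the same 8 GByte file, followed by the aggregate rate. The endpoint host and file paths are placeholders, not the real ACCRE/BNL locations.

```python
#!/usr/bin/env python
# Hedged sketch of the two-job GridFTP test: launch two concurrent
# globus-url-copy transfers of the same 8 GByte file and report the
# aggregate throughput. Hosts and paths are placeholder assumptions.
import subprocess
import time

SRC = 'file:///data/phenix/testfile_8gb.dat'            # local ACCRE source
DST = 'gsiftp://rftpexp01.rcf.bnl.gov/data/phenix/'     # placeholder door
SIZE_MB = 8 * 1024                                      # 8 GByte test file

start = time.time()
# -vb reports per-job performance; -p 4 asks for 4 parallel TCP streams
jobs = [subprocess.Popen(['globus-url-copy', '-vb', '-p', '4',
                          SRC, DST + 'copy_%d.dat' % i])
        for i in range(2)]
for job in jobs:
    job.wait()
elapsed = time.time() - start
print('aggregate: %.1f MBytes/s' % (2 * SIZE_MB / elapsed))
```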

To Do List (as of March 14)

- Resume network testing and development from the bufferboxes
  - Martin (or me, with my own 1008 account) is to do these tests
  - Will use the Run6 automated transfer mechanism
- Test the production macros and reco libraries for Run7 Min Bias
  - Cooperation between Carla, Raphael, and VU
  - Analysis of nanoDSTs for quality control (what should we look at?)
- Establish the file-return and archiving mechanism (next week; a polling-loop sketch follows this list)
  - Disk area at RCF (should it use dCache?), and how much
  - Possible tape archiving system at ACCRE (save both input and output)
- People on task at VU:
  - Ivan Danchev (research associate)
  - Hugo Valle (third-year graduate student)
  - Ron Belmont (second-year graduate student)
  - Dillon Roach (second-year graduate student)
  - ACCRE engineers monitoring 24/7
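One way the file-return step could work, sketched under stated assumptions: poll the ACCRE output area, ship each new nanoDST back to a disk area at RCF, and record what was sent. All directory names, the endpoint, and the plain-text log (standing in for Hugo Valle's database-updating software) are placeholders.

```python
#!/usr/bin/env python
# Hedged sketch of a file-return polling loop: find finished nanoDSTs in
# the ACCRE output area, ship each back to a disk area at RCF, and note
# what was sent. Paths, the endpoint, and the plain-text log (a stand-in
# for the real database bookkeeping) are all placeholder assumptions.
import glob
import os
import subprocess
import time

OUTPUT_DIR = '/data/phenix/run7/nanodst'                 # ACCRE output area
RCF_DST = 'gsiftp://rftpexp02.rcf.bnl.gov/phenix/run7/'  # placeholder door
SENT_LOG = os.path.join(OUTPUT_DIR, 'sent.log')

def already_sent():
    """Names of nanoDSTs already transferred back to RCF."""
    if not os.path.exists(SENT_LOG):
        return set()
    return set(open(SENT_LOG).read().split())

while True:
    done = already_sent()
    for path in sorted(glob.glob(os.path.join(OUTPUT_DIR, '*.root'))):
        name = os.path.basename(path)
        if name in done:
            continue
        rc = subprocess.call(['globus-url-copy', 'file://' + path,
                              RCF_DST + name])
        if rc == 0:                      # record only successful transfers
            open(SENT_LOG, 'a').write(name + '\n')
    time.sleep(600)                      # poll every ten minutes
```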

Backup: What is ACCRE at Vanderbilt?

- Advanced Computing Center for Research and Education
- A collaborative $8.5M computing resource funded by Vanderbilt
- Presently consists of over 1500 processors and 100 TB of disk (the VU group has its own dedicated 4.5 TB for PHENIX simulations)
- Used heavily by Medical Center and Engineering School researchers as well as by Physics Department groups
- ACCRE is eager to continue with physics-experiment reconstruction, first PHENIX and then CMS (new encouragement from DOE)
- Previous PHENIX use of ACCRE:
  - First used extensively to support the QM'02 simulations
  - An order-of-magnitude increase in work for the QM'05 simulations
  - Run6 Level2 PRDFs reconstructed in real time