
Preparations for Reconstruction of Run6 Level2 Filtered PRDFs at Vanderbilt's ACCRE Farm
(revised version with updates from the actual meeting)

C.F. Maguire, Paul Sheldon, Alan Tackett, Bobby Brown, Hugo Valle, Dillon Roach, and Kevin McCord
Vanderbilt University
March 10, 2006 (revised), Run Level 2 Meeting

Proposed Effort and Preparations

- The ACCRE farm will reconstruct all Run6 Level2 filtered PRDFs
  - Reconstruction (PRDFs to nanoDSTs) will be done in near real time
  - Assuming 30 TBytes of Run6 Level2 PRDFs will be produced in 12 weeks, the average transfer rate to VU would be 4.1 Mbytes/second
  - We are specifying a 13 Mbyte/second sustained transfer rate (~100 Mbits/sec); a back-of-the-envelope check follows this slide
  - At the meeting it was confirmed that 30 TBytes is certainly an upper limit for Run6
  - The nanoDSTs will be transported back to RCF
  - Analysis of the nanoDSTs into J/Psi spectra could be done at ACCRE or RCF
- Preparations so far
  - Confirmed that 30 TBytes (or more) will be available during Run6; files may stay on disk at VU until ~July 31
  - Enlisted the cooperation of David Silvermyr, who did this for Run5
  - Two grad students and myself at VU are committed to this effort
  - Staff members at ACCRE view this as a very high priority project for their future
  - Set up a gridFTP server on ACCRE
  - Began network transfer tests from BNL to ACCRE (results on the next slide)
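As a quick sanity check on the rates quoted above, the Python sketch below redoes the arithmetic; the 30 TByte total and 12-week period are taken from the slide, while the headroom comparison against the 13 Mbyte/second specification is only illustrative.

```python
# Back-of-the-envelope check of the Run6 transfer-rate numbers quoted above.
# The 30 TByte total and the 12-week period come from the slide; the headroom
# comparison is illustrative, not an official PHENIX requirement.

TOTAL_BYTES = 30e12            # 30 TBytes of Run6 Level2 PRDFs
RUN_SECONDS = 12 * 7 * 86400   # 12 weeks of running

average_rate = TOTAL_BYTES / RUN_SECONDS              # bytes per second
print(f"average rate   : {average_rate / 1e6:.1f} Mbytes/s")      # ~4.1

sustained_spec = 13e6                                 # 13 Mbytes/s specification
print(f"spec in Mbits/s: {sustained_spec * 8 / 1e6:.0f}")          # ~104 (~100 Mbits/s)
print(f"headroom       : {sustained_spec / average_rate:.1f}x over the average")
```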

Network Transfer Tests

- Bufferboxes to ACCRE (Martin, March 9-10)
  - Used gridFTP, went to firebird.accre.vanderbilt.edu
  - Wrote to the "phenix" /scratch disk area at ACCRE
  - Most recent tests see ~10 Mbytes/second
  - Plan to use gridFTP in order to take advantage of the existing software that already makes automated transfers to CCJ (Japan)
- RCF bbftp servers to ACCRE (Charlie, March 10)
  - Sources are 8 Gbytes on each of rftpexp01 and rftpexp02
  - Used bbftp with an ACCRE gateway node
  - Wrote to the same "phenix" /scratch disk area at ACCRE
  - Results (bbftp is being used just as a network-testing utility):
    - 5 Mbytes/second with 1 parallel stream
    - 12.2 Mbytes/second with 3 or 5 parallel streams
- Internal gridFTP tests (Bobby Brown, March 10)
  - Source on a local ACCRE node writing to the "phenix" /scratch area
  - Saw ~20 Mbytes/second, independent of the number of threads
  - Also did scp, which showed ~15 Mbytes/second
- Conclusion
  - The 13 Mbyte/second specification should be attainable
  - Need only to demonstrate sustainability over several days of testing (a timing sketch follows this slide)
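For the sustained-rate demonstration mentioned in the conclusion, a small timing harness like the sketch below could wrap the repeated transfers. The transfer command is deliberately left as a placeholder, since the actual bbftp and gridFTP invocations are site-specific; the 8 Gbyte sample size is the one quoted for the rftpexp tests.

```python
# Minimal sketch of a timing harness for repeated transfer tests (bbftp, gridFTP,
# or scp).  The transfer command itself is a placeholder: the real tests used the
# site-specific bbftp and gridFTP invocations, which are not reproduced here.
import subprocess
import time

def measure_rate(transfer_cmd: list[str], nbytes: float) -> float:
    """Run one transfer command and return the achieved rate in Mbytes/s."""
    start = time.time()
    subprocess.run(transfer_cmd, check=True)
    return nbytes / (time.time() - start) / 1e6

def sustained_test(transfer_cmd: list[str], nbytes: float, repeats: int) -> None:
    """Repeat the transfer and report each rate, e.g. over a multi-day test."""
    for i in range(repeats):
        rate = measure_rate(transfer_cmd, nbytes)
        print(f"transfer {i + 1:3d}: {rate:5.1f} Mbytes/s")

# Hypothetical usage with an 8 Gbyte test sample, as in the rftpexp tests:
#   sustained_test(["bbftp", <site-specific arguments>], nbytes=8e9, repeats=100)
```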

To Do List

- Continue network testing and development from the bufferboxes
  - Martin is doing these tests
  - Initial transfers will be done manually (by Martin)
  - After about 2 weeks, the automated transfer mechanism will be ready
- Obtain production macros and reco libraries for Run6 Level2
  - Cooperation between Carla, David, and VU (and CCJ?)
  - David will come to Vanderbilt next week
- Demonstrate that Fun4All is working at ACCRE for Run6 PRDFs
  - Prove that ACCRE output is consistent with the RCF output (a checksum sketch follows this slide)
  - These tests should be concluded by the end of next week (March 17)
- Establish the file return and archiving mechanism
  - Disk area at RCF (should it be a dCache?), and how much
  - Set up a tape archiving system at ACCRE (save both input and output)
  - What else??
- Calibration files in the database
  - Will check with David on the periodic updating of the calibration files
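For the item above about proving that the ACCRE output is consistent with the RCF output, a first pass could be a simple checksum comparison of the returned nanoDSTs; the sketch below is only illustrative, the directory layout and file pattern are hypothetical, and a real validation would also compare physics-level quantities rather than raw bytes.

```python
# Illustrative only: byte-level comparison of nanoDSTs produced at ACCRE against
# reference copies at RCF.  Directory names are hypothetical, and a real check
# would also compare histograms/physics quantities, since files produced on
# different hosts need not be byte-identical.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk: int = 1 << 20) -> str:
    """Return the MD5 checksum of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def compare_outputs(accre_dir: str, rcf_dir: str) -> list[str]:
    """List nanoDST files that are missing at RCF or whose checksums differ."""
    mismatches = []
    for accre_file in sorted(Path(accre_dir).glob("*.root")):
        rcf_file = Path(rcf_dir) / accre_file.name
        if not rcf_file.exists() or md5sum(accre_file) != md5sum(rcf_file):
            mismatches.append(accre_file.name)
    return mismatches
```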

Backup: What is ACCRE at Vanderbilt?

- Advanced Computing Center for Research and Education
  - Collaborative $8.5M computing resource funded by Vanderbilt
  - Presently consists of over 1500 processors and 50 TB of disk (the VU group has its own dedicated 4.5 TB for PHENIX simulations)
  - Much work by Medical Center and Engineering School researchers as well as by Physics Department groups
  - ACCRE is eager to get into physics experiment reconstruction, first PHENIX and then CMS
- Previous PHENIX use of ACCRE
  - First used extensively for supporting the QM'02 simulations
  - An order of magnitude more work during the QM'05 simulations
  - Even the QM'05 simulation effort hardly came close to tapping ACCRE's full potential for PHENIX
  - Discovered that the major roadblock to expanding use was the need to gain an order-of-magnitude increase in the sustained, reliable I/O rate back to BNL