1
Preparations for Reconstruction of Run6 Level2 Filtered PRDFs at Vanderbilt’s ACCRE Farm
Charles Maguire et al., March 14, 2006, Local Group Meeting
2
Proposed Effort and Preparations
- The ACCRE farm will reconstruct all Run6 Level2 filtered PRDFs
- Reconstruction (PRDFs to nanoDSTs) will be done in near real time
  - Assuming 30 TBytes of Run6 Level2 PRDFs will be produced in 12 weeks
  - The average transfer rate to VU would be 4.1 MBytes/second
  - We are specifying a 13 MByte/second sustained transfer rate (~100 Mbits/sec); a back-of-envelope check follows below
  - At the Level2 meeting last week we confirmed that 30 TBytes is a good limit for Run6
- The nanoDSTs will be transported back to RCF
- Analysis of the nanoDSTs into J/Psi spectra could be done at ACCRE or RCF

Preparations so far
- Confirmed that 30 TBytes (or more) will be available during Run6; files may stay on disk at VU until ~July 31
- Enlisted the cooperation of David Silvermyr, who did this for Run5 (two grad students and myself at VU are committed to this effort)
- Staff members at ACCRE view this as a very high priority project for their future
- Set up a gridFTP server on ACCRE
- Began network transfer tests from BNL to ACCRE (results on the next slide)
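A quick sanity check of the quoted rates (a minimal sketch in Python; the only inputs are the 30 TByte and 12-week figures stated above, with decimal units assumed):

```python
# Back-of-envelope check of the transfer rates quoted above.
# Assumes 30 TBytes of Run6 Level2 PRDFs produced over 12 weeks
# and decimal units (1 TByte = 1e6 MBytes).

total_mbytes = 30e6                    # 30 TBytes in MBytes
run_seconds = 12 * 7 * 24 * 3600       # 12 weeks in seconds

average_rate = total_mbytes / run_seconds
print(f"Average rate to VU: {average_rate:.1f} MBytes/s")    # ~4.1 MBytes/s

spec_rate = 13.0                       # specified sustained rate, MBytes/s
print(f"Specified rate: {spec_rate * 8:.0f} Mbits/s")        # ~104 Mbits/s
```

The specified 13 MBytes/second therefore leaves roughly a factor of three of headroom over the 4.1 MBytes/second average.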
3
Network Transfer Tests
Bufferboxes to ACCRE (Martin, March 9-10)
- Used gridFTP, went to firebird.accre.vanderbilt.edu
- Wrote to the “phenix” /scratch disk area at ACCRE
- Most recent tests see ~10 MBytes/second
- Plan to use gridFTP in order to take advantage of existing software which is already making automated transfers to CCJ (Japan)

RCF bbftp servers to ACCRE (Charlie, March 10)
- Sources are 8 GBytes on each of rftpexp01 and rftpexp02
- Used bbftp with an ACCRE gateway node
- Wrote to the same “phenix” /scratch disk area at ACCRE
- Results (bbftp is being used just as a network testing utility):
  - 5 MBytes/second with 1 parallel stream
  - 12.2 MBytes/second with 3 or 5 parallel streams

Internal gridFTP tests (Bobby Brown, March 10)
- Source on a local ACCRE node writing to the “phenix” /scratch area
- Saw ~20 MBytes/second, independent of the number of threads
- Also did scp, which showed ~15 MBytes/second

Conclusion
- The 13 MByte/second specification should be attainable
- Need only to demonstrate sustainability over several days of testing (a timing sketch follows below)
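For the multi-day sustainability test mentioned in the conclusion, a minimal sketch of how repeated transfers could be timed and logged (the transfer command, file size, and destination path below are placeholders, not the actual commands used in the tests above):

```python
# Minimal sketch of a sustained-throughput check, assuming a transfer
# command (bbftp, globus-url-copy, scp, ...) is supplied by the caller.
# The example command and the 8 GByte file size are placeholders.

import shlex
import subprocess
import time

def measure_rate(transfer_cmd: str, nbytes: float) -> float:
    """Run one transfer and return the effective rate in MBytes/s."""
    start = time.time()
    subprocess.run(shlex.split(transfer_cmd), check=True)
    elapsed = time.time() - start
    return nbytes / 1e6 / elapsed

# Repeating this in a loop for several days and logging each result is
# essentially what the sustainability demonstration amounts to, e.g.:
# rate = measure_rate(
#     "scp testfile user@firebird.accre.vanderbilt.edu:/scratch/phenix/", 8e9)
# print(f"{rate:.1f} MBytes/s")
```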
4
To Do List (as of March 14)
Continue network testing and development from the bufferboxes
- Martin is doing these tests
- Initial transfers will be done manually (by Martin)
- After about 2 weeks, the automated transfer mechanism will be ready

Test the production macros and reco libraries for Run6 Level2
- Cooperation between Carla, David, and VU (Carla is away at WWND this week)
- David will come to Vanderbilt next week; currently working remotely from ORNL
- We have demonstrated that Fun4All works at ACCRE for Run5 PRDFs
- Must prove that the ACCRE output is consistent with the RCF output (a comparison sketch follows below)
- These tests should be concluded by the end of this week (March 17)

Establish the file return and archiving mechanism (next week)
- Disk area at RCF (should it be a dCache?), and how much
- Set up a tape archiving system at ACCRE (save both input and output)

What else??
- Calibration files in the database (the new database seems to be working OK)
- Will check with David on the periodic updating of the calibration files
- Develop a WWW site containing all the information about this effort
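For the ACCRE-versus-RCF consistency check, a minimal sketch of one possible comparison (the directory names are hypothetical; if the nanoDST ROOT files are not bit-identical across sites, e.g. because of embedded timestamps, the comparison would instead be made on reconstructed quantities or histograms):

```python
# Sketch of a file-level cross-check: compare checksums of nanoDSTs
# produced at ACCRE against the corresponding RCF files.
# Directory names are hypothetical local staging areas.

import hashlib
from pathlib import Path

def checksums(directory: str) -> dict[str, str]:
    """Map each ROOT file name in `directory` to its MD5 hex digest."""
    out = {}
    for path in sorted(Path(directory).glob("*.root")):
        out[path.name] = hashlib.md5(path.read_bytes()).hexdigest()
    return out

accre = checksums("nanoDST_accre")   # hypothetical staged copies
rcf = checksums("nanoDST_rcf")
for name in sorted(set(accre) & set(rcf)):
    status = "OK" if accre[name] == rcf[name] else "DIFFERS"
    print(f"{name}: {status}")
```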
5
Backup: What is ACCRE at Vanderbilt?
Advanced Computing Center for Research and Education
- Collaborative $8.5M computing resource funded by Vanderbilt
- Presently consists of over 1500 processors and 50 TB of disk (the VU group has its own dedicated 4.5 TB for PHENIX simulations)
- Much work by Medical Center and Engineering School researchers as well as by Physics Department groups
- ACCRE is eager to get into physics experiment reconstruction: first PHENIX and then CMS

Previous PHENIX use of ACCRE
- First used extensively for supporting the QM’02 simulations
- Order of magnitude more work during the QM’05 simulations
- The QM’05 simulation effort hardly came close to tapping ACCRE’s full potential use for PHENIX
- Discovered that the major roadblock to expanding use was the need to gain an order of magnitude increase in sustained, reliable I/O rate back to BNL