CC-J Monthly Report (02/09/2000) Shin’ya Sawada (KEK) for CC-J Working Group


Contents
VRDC event generation at CC-J
Hardware upgrade plan in the near future
Very preliminary plan towards CC-J operation

VRDC event generation at CC-J
Au-Au (Hijing - PISA99 - PHOOL response chain)
–Central arm, central bias: 40k events
–Central arm, minimum bias: 80k events
–PISA files and PRDFs are being sent to RCF via WAN; ~100 files for central arm & central bias already exist at RCF.
proton-proton ((Pythia-) PISA99 - PHOOL response chain)
–Central arm (open heavy flavor): 40k events
–Muon arm (DY, min. bias): 318k events
Trigger study
–Central arm (J/ψ & …): 114k events

Throughput (CPU time)
Numbers are normalized to a 450 MHz Pentium CPU.
Hijing: ~30 min. / 600 events (central bias)
PISA99:
–Au-Au central arm, minimum bias: ~200 sec/event
–Au-Au central arm, central bias: ~500 sec/event
–Proton-proton and trigger runs: vary case by case
Response chain:
–Au-Au central arm, minimum bias: ~15 sec/event
–Au-Au central arm, central bias: ~40 sec/event
–Proton-proton and trigger runs: vary case by case
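
As a rough illustration of what these per-event times imply, the short Python sketch below estimates the total CPU time and wall-clock time for the two Au-Au samples quoted earlier. The 64-CPU farm size is an assumption (the upgrade slide later quotes 96 CPUs only after adding 32 more); the per-event times are the ones listed above.

```python
# Rough CPU-time estimate for the Au-Au VRDC samples, using the per-event
# times quoted above (PISA99 + response chain, normalized to 450 MHz).
# The 64-CPU farm size is an assumption, not a number from this slide.

SAMPLES = {
    # name: (events, PISA99 sec/event, response-chain sec/event)
    "Au-Au central arm, minimum bias": (80_000, 200, 15),
    "Au-Au central arm, central bias": (40_000, 500, 40),
}

N_CPUS = 64  # assumed number of 450 MHz-equivalent CPUs

for name, (n_ev, t_pisa, t_resp) in SAMPLES.items():
    cpu_days = n_ev * (t_pisa + t_resp) / 86400.0
    print(f"{name}: ~{cpu_days:.0f} CPU-days, "
          f"~{cpu_days / N_CPUS:.1f} days on {N_CPUS} CPUs")
```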

Throughput (data transfer)
Data amount for Au-Au:
–PISA output: ~300 MB / 200 events (min. bias), ~800 MB / 200 events (central bias)
–PRDF: ~1 GB / 200 events
rcp from CC-J to BNL (spin.riken.bnl.gov):
–~350 kB/sec with 1 rcp
–~700 kB/sec with 2 parallel rcp's
–~950 kB/sec with 3 parallel rcp's
–The total throughput appears to saturate around this level, mainly because of the network bandwidth around RIKEN; APAN itself has much more bandwidth.
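
To put the measured rcp rates in perspective, here is a small back-of-the-envelope Python estimate of how long it would take to ship a sample's PRDF output to BNL at the saturated aggregate rate. Only the ~1 GB / 200 events size and the ~950 kB/s figure come from this slide; the event counts are example values.

```python
# Back-of-the-envelope PRDF transfer times from CC-J to BNL at the measured
# aggregate rcp rate. Size and rate are from the slide; event counts are
# example values only.

PRDF_BYTES_PER_EVENT = 1.0e9 / 200   # ~1 GB per 200 events
AGGREGATE_RATE = 950e3               # ~950 kB/s with 3 parallel rcp's

def transfer_days(n_events, rate=AGGREGATE_RATE):
    """Days needed to copy n_events worth of PRDF at the given rate."""
    return n_events * PRDF_BYTES_PER_EVENT / rate / 86400.0

for n_events in (40_000, 80_000):
    print(f"{n_events} events: ~{transfer_days(n_events):.1f} days "
          f"at {AGGREGATE_RATE / 1e3:.0f} kB/s aggregate")
```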

Problems
“Successful” event ratio
PISA99
–“BIMPCT”: reported at the PHENIX computing meeting in January 2000 by Y. Watanabe. C. Maguire commented that it was due to FLUKA and that his simulation group was going to patch this problem by skipping bad events.
–“ZEBRA error”: some of these could be reproduced => software problem? Others could not be reproduced => hardware (memory etc.) problem?
PHOOL response chain
–‘segmentation violation’ … needs more study.

Notes
File name convention
–Tarun, Indrani, Tim and others have agreed on the rule; it may not be the final version. Runs at CC-J have an offset of 1E8.
Run # database
–Even now there are many runs of various kinds, and simulation work will continue for on the order of a decade. => Should be included in the to-do list.
Script
–Stable and easy-to-understand job scripts (or their templates) should be written, especially so that various people can use the farm (see the sketch below). => “script builder” in the to-do list.
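
A minimal sketch of the kind of “script builder” suggested here: a Python helper that fills a job-script template from a few parameters, so that different users submit consistent jobs. The PBS-style directives, queue name, command names and paths are hypothetical placeholders, not the actual CC-J setup; only the stage-in/stage-out pattern and the 1E8 run-number offset come from these slides.

```python
# Hypothetical "script builder" sketch: fills a job-script template from a
# few parameters. Queue, paths and command names are placeholders; the 1E8
# run-number offset for CC-J runs is the convention mentioned above.

TEMPLATE = """#!/bin/sh
#PBS -q {queue}
#PBS -N {job_name}
cd {work_dir}
rcp {input_host}:{input_file} .             # stage input to local disk
{pisa_cmd} {input_file}                     # run PISA99
{response_cmd} pisa_out.root prdf_out       # run the PHOOL response chain
rcp prdf_out {output_host}:{output_dir}/    # ship the PRDF back
"""

CCJ_RUN_OFFSET = 100_000_000  # 1E8 offset for runs produced at CC-J

def build_script(run_number, **overrides):
    """Return a job script for one simulation run."""
    params = dict(queue="ccj", job_name=f"sim_{run_number + CCJ_RUN_OFFSET}",
                  work_dir="/scratch", input_host="spin.riken.bnl.gov",
                  input_file=f"hijing_{run_number}.dat",
                  pisa_cmd="pisa99", response_cmd="phool_response",
                  output_host="rcf.bnl.gov", output_dir="/prdf")
    params.update(overrides)
    return TEMPLATE.format(**params)

print(build_script(42))
```

A GUI or web front end (as noted on the to-do slide) would only need to collect these parameters and call build_script.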

Notes continued
Read/write via NFS
–At RCF, Charlie's team seems to read/write all files via NFS, while we rcp (or pftp) files to/from local disk before/after the simulation. CC-J experience shows that NFS writes would be a bottleneck for simulations with high I/O rates (see …).
“mu-camera” for the muon arms
–Might have to be developed soon.
Memory usage for the response chain
–Was about 300 MB in some cases, while we have 256 MB of real memory per CPU.

Hardware upgrade plan / CC-J schedule
32 more CPUs (700 MHz?) will be purchased by the end of March. => Total number of CPUs will be 96.
–What is the status of the RH6.1 tests?
Another 1.6 TB of RAID disk will be purchased by the end of March. => Total work disk will be 3.2 TB.
‘Acceptance’ tests of HPSS and other components will take place in February and March.
Test operation for PHENIX from around mid-March?
Routine operation from May?

Very preliminary idea on CC-J operation
See the Proposal for the PHENIX CC-J.
–Discussions are going on among the people concerned.
–A CC-J Planning and Coordination Office will be established.
–Simulations should be proposed through the PWGs?
–Each PWG should define persons responsible for its simulations?
–A simulation coordinator on the PHENIX side??
=> A detailed announcement will be made at the next core week.

To do
Tests of replication of Objectivity/DB
“Script builder”?
–At least good templates are necessary. Given a good template, one could easily develop a GUI/web-based “script builder”.
Run number database??
–So many simulations, so many runs of simulated events, various run conditions… (a minimal bookkeeping sketch follows below)
Documentation
–Explain the CC-J specific configuration, rules and so on, published on the web and/or as PDF.
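
For the run-number database item, here is a very small bookkeeping sketch, assumed rather than an existing CC-J tool, using Python's sqlite3 module. The schema and field names are made up for illustration; only the idea of recording each simulated run with its conditions, and the 1E8 CC-J run offset, come from these slides.

```python
# Minimal run-number bookkeeping sketch using sqlite3 (illustrative only).
# One record per simulated run, including its production conditions.

import sqlite3

conn = sqlite3.connect("ccj_runs.db")
conn.execute("""CREATE TABLE IF NOT EXISTS runs (
                    run_number INTEGER PRIMARY KEY,
                    sample     TEXT,     -- e.g. 'Au-Au central arm, min. bias'
                    generator  TEXT,     -- e.g. 'Hijing', 'Pythia'
                    n_events   INTEGER,
                    site       TEXT)""")

CCJ_RUN_OFFSET = 100_000_000  # CC-J runs carry the 1E8 offset

def register_run(local_run, sample, generator, n_events):
    """Record one CC-J production run."""
    conn.execute("INSERT OR REPLACE INTO runs VALUES (?, ?, ?, ?, ?)",
                 (local_run + CCJ_RUN_OFFSET, sample, generator, n_events, "CC-J"))
    conn.commit()

register_run(42, "Au-Au central arm, min. bias", "Hijing", 200)
for row in conn.execute("SELECT * FROM runs ORDER BY run_number"):
    print(row)
```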