PC clusters in KEK
A. Manabe, KEK (Japan)
22 May '01, LSCC WS '01

Slide 2: PC clusters in KEK
- Belle (in KEKB) PC clusters
- Neutron shielding simulation cluster
- Some other activities on PC clusters

Slide 3: Belle PC cluster
- Used for experimental data production: 'raw data' to 'reconstructed data'.
- More than 400 CPUs in 3 clusters.
- Cooperates with SUN servers for I/O (the B computer system).
- The number of users is rather small (<5).
- Most PCs are 4-CPU SMP machines; the homemade software 'basf' handles the SMP processing (the old system used 28-CPU SMP servers).
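[Editor's note: the slides do not show how basf itself works. As a generic illustration of event-level parallelism on a 4-CPU SMP node, here is a minimal Python sketch; `read_events` and `reconstruct_event` are hypothetical placeholders, not basf code.]

```python
# Minimal sketch of event-level SMP parallelism, in the spirit of what the
# slide describes for 'basf' on a 4-CPU node. This is NOT the basf code;
# read_events() and reconstruct_event() are hypothetical placeholders.
from multiprocessing import Pool

def read_events(path):
    """Hypothetical reader: yield raw events from a staged run file."""
    with open(path, "rb") as f:
        while chunk := f.read(512 * 1024):   # pretend each chunk is one event
            yield chunk

def reconstruct_event(raw):
    """Hypothetical per-event reconstruction (CPU-bound work)."""
    return len(raw)                          # stand-in for real physics code

if __name__ == "__main__":
    events = read_events("run_0001.raw")
    with Pool(processes=4) as pool:          # one worker per CPU of the SMP node
        for result in pool.imap_unordered(reconstruct_event, events):
            pass                             # write reconstructed output here
```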

Slide 4: Belle PC cluster (1)
- Since 1999.
- CPU nodes: DELL PowerEdge 6300 (Pentium III Xeon 500 MHz + 9 GB disk) x 36.
- Disk nodes: (2 CPUs + 800 GB (Arena) RAID disk) x 8.
- Network: 100BaseT switch; 1000BaseSX uplink to the B computer system.
- Installed by physicists; the rack is homemade.

Slide 5: Belle PC clusters 1 and 2 (photos of Cluster 1 and Cluster 2)

Slide 6: Belle PC cluster (2)
- Since winter 2000.
- CPU nodes: Compaq ProLiant DL360 (2 CPU x Pentium III MHz + 9 GB disk) x 40.
- Disk nodes: (2 CPUs + 1.2 TB RAID disk) x 4.
- Installed by Compaq (hardware and software); about 1 week for the whole installation.

Slide 7: Belle PC cluster (3)
- Since March 2001.
- CPU nodes: Compaq ProLiant D580 (Pentium III Xeon 700 MHz + 50 GB disk) x 60.
- Network: 100BaseT to each node; 1000BaseSX to the B computer system.
- 5-year lease, included in the B computer budget; installation service at the start and a few times per year.

Slide 8: Belle PC cluster 3 (photo)

22 May '01LSCC WS '019 SW Tape Library PC Disk Server SUN WS 1. PC nodes TCP data transfer SW copy a RAW data file in tape to files in Disk server. (9MB/s) 2. Production jobs running in PC nodes reading/writing from/to the file using NFS. 3. write back Reconstructed data files to a tape or HSM system.

Slide 10: Some numbers
- Processing one event takes about 6 seconds on a 1 GHz Pentium III.
- 1 job = 1 experimental run ~ 16 GB = 32 files; 10-20 hours on 4 CPUs (1 PC node).
- Job submission is done manually, with the help of Perl scripts and a database that manages experimental-run information.
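[Editor's note: the slides only say that submission is driven by Perl scripts and a run database. As a sketch of that kind of helper (written here in Python rather than Perl, with a hypothetical SQLite schema and `submit_run.sh` wrapper), one might have:]

```python
# Sketch of a manual submission helper driven by a run database.
# The original used Perl scripts; this Python version, the SQLite schema,
# and the submit_run.sh wrapper are all hypothetical illustrations.
import sqlite3
import subprocess

def pending_runs(db_path="runs.db"):
    """Return (exp, run, tape_file) tuples for runs not yet processed."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT exp, run, tape_file FROM runs WHERE status = 'pending'"
    ).fetchall()
    con.close()
    return rows

def submit(exp, run, tape_file):
    """Hand one run (~16 GB, 32 files) to a free 4-CPU PC node."""
    subprocess.run(["./submit_run.sh", str(exp), str(run), tape_file], check=True)

if __name__ == "__main__":
    for exp, run, tape_file in pending_runs():
        print(f"submitting exp {exp} run {run} ({tape_file})")
        submit(exp, run, tape_file)
```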

Slide 11: Belle PC cluster summary
- Belle PC clusters: 464 CPUs ( MHz), 256 MB memory per CPU; ~14 TB disk (RAID ~10 TB, local ~4 TB); 100BaseT network.
- B computer system (data servers and general users): ~40 SUN servers (each with a DTF2 tape drive); 500 TB tape library (Sony DTF2); ~20 NFS/HSM* disk servers (~10 TB RAID).
- *HSM = Hierarchical Storage Management system.

Slide 12: Simulation farm for the HIPAF beam line design
- High Intensity Proton Accelerator Facility: 50 GeV, 15 microA.
- Simulations for neutron beam line design and neutron radiation shielding.
- ~50 x (2 Pentium III) nodes.
- NMTC and MCNP running on MPI.
- To be installed this year.
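[Editor's note: the slide states only that the transport codes run on MPI across the ~50 dual-CPU nodes. As a generic illustration of that style of parallelism (not of NMTC or MCNP themselves), here is a minimal mpi4py sketch that splits Monte Carlo histories across ranks and reduces a tally; `simulate_history` is a hypothetical stand-in.]

```python
# Generic sketch of splitting Monte Carlo histories across MPI ranks,
# in the spirit of running NMTC/MCNP-style transport on a cluster.
# simulate_history() is a hypothetical stand-in for the real physics.
import random
from mpi4py import MPI

def simulate_history(rng):
    """Hypothetical single-particle history; returns a shielding tally."""
    return rng.random()

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    total_histories = 1_000_000
    my_histories = total_histories // size
    rng = random.Random(12345 + rank)        # independent stream per rank

    local_tally = sum(simulate_history(rng) for _ in range(my_histories))
    tally = comm.reduce(local_tally, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"mean tally per history: {tally / (my_histories * size):.6f}")
```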

Slide 13: Other activities
- PC farm I/O R&D by the computing center: HPSS (a Linux HPSS client API driver by IBM); Storage Area Network (with Fujitsu).
- GRID activity for the ATLAS Regional Center in Japan: Gfarm (