Grid Efforts in Belle
3/27/2007
Hideyuki Nakazawa (National Central University, Taiwan), Belle Collaboration, KEK.



Outline
● Belle experiment
● Computing system in Belle
● LCG at KEK and Belle VO status
● Introduction of SRB
● Summary

Belle Experiment
A "B factory" experiment at KEK (Japan).
KEKB Accelerator:
● Asymmetric e+e- collider: 3.5 GeV on 8 GeV
● 3 km circumference
● 22 mrad crossing angle
● Continuous injection
Belle Detector:
● General-purpose detector with 7 sub-detectors

Belle Collaboration
13 countries, 57 institutes, ~400 collaborators:
IHEP (Vienna), ITEP, Kanagawa U., KEK, Korea U., Krakow Inst. of Nucl. Phys., Kyoto U., Kyungpook Nat'l U., EPF Lausanne, Jozef Stefan Inst. / U. of Ljubljana / U. of Maribor, U. of Melbourne, Aomori U., BINP, Chiba U., Chonnam Nat'l U., U. of Cincinnati, Ewha Womans U., Frankfurt U., Gyeongsang Nat'l U., U. of Hawaii, Hiroshima Tech., IHEP (Beijing), IHEP (Moscow), Nagoya U., Nara Women's U., National Central U., Nat'l Kaohsiung Normal U., National Taiwan U., National United U., Nihon Dental College, Niigata U., Osaka U., Osaka City U., Panjab U., Peking U., U. of Pittsburgh, Princeton U., Riken, Saga U., USTC, Seoul National U., Shinshu U., Sungkyunkwan U., U. of Sydney, Tata Institute, Toho U., Tohoku U., Tohoku Gakuin U., U. of Tokyo, Tokyo Inst. of Tech., Tokyo Metropolitan U., Tokyo U. of Agri. and Tech., Toyama Nat'l College, U. of Tsukuba, Utkal U., VPI, Yonsei U.
Lots of contributions from Taiwan.

Luminosity
Produce a large number of B mesons!
● 1 fb^-1 ~ 10^6 BB pairs
● 1 fb^-1 ~ 1 TB / day
● Crab cavities installed, being tuned now; luminosity doubled?
[Plot: integrated luminosity (fb^-1) vs. time; peak-luminosity value lost in transcription]
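The two rules of thumb on this slide can be turned into a quick back-of-the-envelope calculation. The e+e- cross section of ~1.1 nb used below is an assumption chosen to be consistent with the slide's "1 fb^-1 ~ 10^6 BB" figure, not a number from the talk:

```python
# Back-of-the-envelope conversion from integrated luminosity to BB-pair
# counts and raw-data volume, following the slide's rules of thumb:
#   1 fb^-1 ~ 10^6 BB pairs, roughly 1 TB of raw data per fb^-1.

SIGMA_BB_NB = 1.1      # assumed BB production cross section in nanobarns
NB_TO_FB = 1.0e6       # 1 nb = 10^6 fb

def bb_pairs(lumi_fb: float) -> float:
    """Expected number of BB pairs for an integrated luminosity in fb^-1."""
    return SIGMA_BB_NB * NB_TO_FB * lumi_fb

def raw_data_tb(lumi_fb: float, tb_per_fb: float = 1.0) -> float:
    """Approximate raw-data volume, assuming ~1 TB per fb^-1."""
    return lumi_fb * tb_per_fb

print(f"100 fb^-1 -> {bb_pairs(100):.2e} BB pairs, ~{raw_data_tb(100):.0f} TB")
```

For 100 fb^-1 this gives about 10^8 BB pairs and of order 100 TB of raw data, matching the scale of the "hadron 120 TB" sample mentioned on a later slide.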

History of Belle Computing System

Performance                          | 1st system | 2nd system     | current system
-------------------------------------|------------|----------------|----------------
Computing server [SPECint2000 rate]  | ~100 (WS)  | ~1,250 (WS+PC) | ~42,500 (PC)
Disk capacity [TB]                   | ~4         | ~9             | 1000
Tape library capacity [TB]           |            |                | 3,500
Work group server [# of hosts]       | 3+(9)FS    |                |
User workstation [# of hosts]        | 25WS+68X   | 23WS+100PC     | 128PC

Overview of the B Computer
[Diagram: storage, computing servers, workgroup servers (partly reserved for Grid), and the online reconstruction farm]

Belle System
● Computing server: ~42,500 SPECint2000 rate
● Storage system (disk): 1 PB
● Storage system (HSM): 3.5 PB

Data Production at Belle
[Flow: online reconstruction farm → raw data + "DST" data production → loose selection → "MDST" data (four-vectors, PID info, etc.) → users' analyses]
● Raw data + DST: hadron 120 TB + others, ~1 PB total, split across HSM and non-HSM storage
● MC generation and detector simulation: 2.5 THz (to finish in 6 months), 2 THz (to finish in 2 months)
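The "loose selection" step in this flow reduces the DST stream to the compact MDST sample that analysts actually read. A minimal sketch of such a skim is below; the event fields and cut values are illustrative, not Belle's actual selection criteria:

```python
# Sketch of a loose-selection skim: DST-level events pass a cheap filter
# and only survivors are written to the compact "MDST" stream
# (four-vectors, PID info, etc.). Fields and cuts are hypothetical.

def loose_hadronic_skim(events):
    """Keep events with enough charged tracks and visible energy."""
    return [ev for ev in events
            if ev["n_tracks"] >= 3 and ev["visible_energy_gev"] > 2.0]

dst_events = [
    {"n_tracks": 5, "visible_energy_gev": 7.8},  # hadronic candidate
    {"n_tracks": 1, "visible_energy_gev": 0.4},  # e.g. Bhabha leftover
    {"n_tracks": 4, "visible_energy_gev": 6.1},
]
mdst_events = loose_hadronic_skim(dst_events)
print(len(mdst_events))  # -> 2
```

Keeping the selection loose at this stage lets many different analyses share one skimmed dataset, which is what makes the 120 TB hadron sample manageable compared with the ~1 PB of raw+DST data.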

Why Grid in Belle?
● No urgent requirement.
● Belle is shifting to precision and exotic measurements:
  more MC statistics are necessary for precision measurements;
  new skims are needed for exotic processes.
● A lesson in the de facto standard.
● "Maybe we should start considering Grid" (just my feeling).

Grid Introduction Strategy
● Strong support from KEK CRC.
● Start with MC production and, accumulating experience, gradually shift to handling experimental data.
● Recruitment: some collaborators who are already running LCG are preparing to join the Belle VO.
● Experiencing the Grid's potential may change Belle's recognition?

LCG Deployment at KEK

JP-KEK-CRC-01
● Registered to GOC, in operation as WLCG since Nov.
● Site role: practice for the production system JP-KEK-CRC-02; test use among university groups in Japan.
● Resources and components: SL w/ gLite-3.0 or later; CPU: 14, storage: ~1.5 TB; FTS, FTA, RB, MON, BDII, LFC, CE, SE.
● Supporting VOs: belle, apdg, g4med, ppj, dteam, ops and ail.

JP-KEK-CRC-02
● Registered to GOC, in operation as WLCG since early in the deployment.
● Site role: more stable services based on KEK-1 experience.
● Resources and components: SL or SLC w/ gLite-3.0 or later; CPU: 48, storage: ~1 TB (w/o HPSS); full components.
● Supporting VOs: belle, apdg, g4med, atlasj, ppj, ilc, dteam, ops and ail.

Operation is supported by great efforts of APROC members at ASGC.

Belle VO
● 9 sites, ~60 CPUs, 2 TB storage
● Belle software installed at 3 sites (KEK x2, ASGC)
● MC production ongoing
● Installation manual ready
● GFAL used with the Belle software
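The MC production mentioned here runs as ordinary LCG jobs. Purely as an illustration (the executable, file names and software tag below are hypothetical, not Belle's actual production setup), a gLite JDL description for such a job might look like:

```
// Illustrative gLite JDL for a Belle MC job on LCG (all names hypothetical)
Executable          = "run_belle_mc.sh";
Arguments           = "mc-config.dat";
StdOutput           = "mc.out";
StdError            = "mc.err";
InputSandbox        = {"run_belle_mc.sh", "mc-config.dat"};
OutputSandbox       = {"mc.out", "mc.err"};
VirtualOrganisation = "belle";
// Steer the job to the sites that advertise the Belle software installation
Requirements        = Member("VO-belle-software",
                             other.GlueHostApplicationSoftwareRunTimeEnvironment);
```

The Requirements expression is how a VO restricts jobs to the subset of sites (here 3 of 9) where its experiment software has been installed and the corresponding GLUE tag published.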

Total Number of Jobs at KEK in 2006
[Chart: job counts at JP-KEK-CRC-01 and JP-KEK-CRC-02, axis ticks up to ~1,000 and ~1,400; Belle jobs highlighted at both sites]

Total CPU Time at KEK in 2006 (Normalized by 1kSI2K)
[Chart: CPU time in hrs·kSI2K at JP-KEK-CRC-01 (ticks up to ~4,000) and JP-KEK-CRC-02 (ticks up to ~12,000); Belle jobs highlighted at both sites]

Logical Site Overview
[Network diagram: KEK-CC and KEK-DMZ behind the KEK firewall, connected to SuperSINET; Grid LAN, MCAT, SRB and SRB-DSI hosts on dedicated /21-/24 subnets; HSM, local files, CPUs and work servers]
Files are exchanged between the Belle and Grid sides with scp:
  $ scp output Belle:
  $ scp input Grid:

SRB Introduction Schedule
[Timeline: planning and construction tracks for Grid, Belle operation, networking, and KEKCC/IBM; setup of MCAT, SRB, firewall and SRB-DSI; connection tests and preparation leading to start of operation]

Belle Grid Deployment Future Plan (preliminary design)
● Federate with Japanese universities.
● KEK hosts the Belle experiment and acts as Tier-0.
● Universities with reasonable resources run full LCG sites (Tier-1); universities without resources run a UI only.
● Central services such as VOMS, LFC and FTS are provided by KEK; KEK also covers web information and support services.
● Grid operation is shared with 1-2 staff at each full LCG site.
[Diagram: JP-KEK-CRC-02 and JP-KEK-CRC-03 (Tier-0) serving university Tier-1 sites and UIs; JP-KEK-CRC-03 to be deployed in the future]

Summary
● Belle VO launched
● Belle software installed at 3 sites
● KEK sites are mainly used by Belle
● MC production ongoing
● SRB is being introduced

Additional (Belle's) Resources
We now operate a high-performance computer system, but we did not suddenly switch to the "less expensive" system: we have been testing such systems for several years.
● Linux-based PC clusters
● S-ATA disk based RAID drives
● S-AIT tape drives
Test systems: 350 TB disks, 1.5 PB tapes, 934 CPUs, 20 units / 20 TB
B computer (for comparison): 1000 TB disks, 3.5 PB tapes, 2280 CPUs
These resources have been essential for Belle (production/analysis).

Belle Grid Deployment Plan
We are planning a 2-phased deployment for the Belle experiment.
● Phase 1: Belle users use the VO in JP-KEK-CRC-02, sharing with other VOs.
  JP-KEK-CRC-02 consists of the "Central Computing System" maintained by the IBM corporation.
  Available resources: CPU: 72 processors (Opteron), SE: 200 TB (with HPSS).
● Phase 2: deployment of JP-KEK-CRC-03 as the Belle production system.
  JP-KEK-CRC-03 uses part of the "B Factory Computer System" resources.
  Available resources (maximum estimate): CPU: 2200 CPUs, SE: 1 PB (disk), 3.5 PB (HSM).
  This system will be maintained by CRC and the NetOne corporation.

Computing Servers
● DELL PowerEdge 1855: Xeon 3.6 GHz x2, 1 GB memory; made in Taiwan [Quanta]
● WG: 80 servers (for login), Linux (RHEL)
● CS: 1128 servers, Linux (CentOS)
● Total: ~42,500 SPECint2000 rate, equivalent to 8.7 THz
● CPU capacity will be increased by x2.5 in SPECint2000 rate
● 1 enclosure = 10 nodes / 7U space; 1 rack = 50 nodes
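The "equivalent to 8.7 THz" figure can be checked directly from the node counts on this slide, assuming (as the slide suggests) that both the 80 workgroup servers and the 1128 computing servers carry two 3.6 GHz Xeons each:

```python
# Consistency check of the "8.7 THz" aggregate-clock figure:
# (80 WG + 1128 CS) nodes x 2 CPUs/node x 3.6 GHz/CPU.

GHZ_PER_CPU = 3.6
CPUS_PER_NODE = 2

def aggregate_thz(n_nodes: int) -> float:
    """Summed clock rate of n dual-Xeon nodes, in THz."""
    return n_nodes * CPUS_PER_NODE * GHZ_PER_CPU / 1000.0

total = aggregate_thz(80 + 1128)
print(f"{total:.1f} THz")  # -> 8.7 THz
```

The sum comes out at 8.6976 THz, which rounds to the 8.7 THz quoted on the slide.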

Storage System (Disk)
● Total 1 PB with 42 file servers (1.5 PB in 2009)
● SATA-II 500 GB disks x ~2000 (~1.8 failures/day?)
● 3 types of RAID (to avoid common problems):
  ADTX ArrayMasStor LP: 15 drives / 3U / 7.5 TB
  Nexsan SATABeast: 42 drives / 4U / 21 TB
  SystemWorks MASTER RAID B: 16 drives / 3U / 8 TB (made in Taiwan)
● HSM = 370 TB, non-HSM = 630 TB
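The "~1.8 failures/day?" worry is easy to put in perspective: with ~2000 drives, the expected daily failure count is just the drive count times the annualized failure rate (AFR) divided by 365. The AFR values below are assumptions for illustration, not measured Belle numbers:

```python
# Expected daily drive failures for a pool of n_drives, given an assumed
# annualized failure rate (AFR). Belle's slide floats "~1.8 failures/day?"
# for ~2000 SATA drives, which corresponds to a very pessimistic AFR.

def failures_per_day(n_drives: int, afr: float) -> float:
    """Expected drive failures per day: n_drives * AFR / 365."""
    return n_drives * afr / 365.0

for afr in (0.02, 0.05, 0.33):
    print(f"AFR {afr:4.0%}: {failures_per_day(2000, afr):.2f} failures/day")
```

With typical AFRs of a few percent one expects a failure every few days; the slide's ~1.8/day only materializes if roughly a third of the drives fail per year, so it reads as a worst-case planning number rather than an expectation.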

Storage System (Tape)
● Backup: 90 TB, 12 drives, 3 servers; LTO3, 400 GB/volume; NetVault
● HSM: PetaSite (SONY): 3.5 PB, 60 drives, 13 servers; S-AIT, 500 GB/volume, 30 MB/s per drive; PetaServe