Belle II Data Management System
Junghyun Kim, Sunil Ahn and Kihyeon Cho* (on behalf of the Belle II Computing Group)
*Presenter
High Energy Physics Team, KISTI (Korea Institute of Science and Technology Information)
October 18~22, 2010, CHEP 2010, Academia Sinica, Taipei, Taiwan

Contents
- Belle II experiment
- Belle II data handling system: meta-data system, data cache system
- Tests of large-scale data handling: with Belle data, with Belle II data (random data)
- The interaction between HLT and storage
- Summary

Belle vs. Belle II

                 Belle            Belle II
Time schedule    1998~2010        2014~
Luminosity       1 ab^-1          50 ab^-1
Events           1 billion        50 billion
Goal             CP measurement   New Physics

To handle 50 times more data and to use grids ⇒ a new data handling system.

Belle II computing model
[Diagram: raw data storage and processing at the KEK site (tape, CPU, disk); MC production and ntuple production at grid sites via DIRAC, with AMGA for metadata; optional MC production on cloud resources; ntuple analysis on local resources; data tools serve mDST data, mDST MC, and ntuples to client users.]

Data handling outline
[Diagram: the data handling plan across KEK and the grid sites, managed through DIRAC.]

Belle II metadata system
To construct the data handling (DH) system for the Belle II experiment, to improve scalability and performance, and to run on grid farms ⇒ AMGA (ARDA Metadata Grid Application), used together with DIRAC and the data cache.
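To make the role of the file-level metadata catalogue concrete, below is a minimal sketch of registering files and running a selection query. It uses sqlite3 only as a stand-in for the AMGA backend, and the attribute names (exp, run, stream, lfn, nevents) and paths are illustrative assumptions, not the actual Belle II schema or the AMGA client API.

```python
import sqlite3

# Minimal sketch of a file-level metadata catalogue.  sqlite3 stands in for
# the AMGA backend (AMGA sits in front of a relational database and is
# accessed through its own client); attribute names are assumptions.

def create_catalogue(conn):
    conn.execute("""
        CREATE TABLE IF NOT EXISTS file_metadata (
            lfn     TEXT PRIMARY KEY,   -- logical file name
            exp     INTEGER,            -- experiment number
            run     INTEGER,            -- run number
            stream  INTEGER,            -- stream number
            nevents INTEGER             -- events in the file
        )""")

def register_file(conn, lfn, exp, run, stream, nevents):
    """Register one file and its attributes (one row per file, not per event)."""
    conn.execute("INSERT OR REPLACE INTO file_metadata VALUES (?, ?, ?, ?, ?)",
                 (lfn, exp, run, stream, nevents))

def find_files(conn, exp, run_min, run_max, stream):
    """Condition-driven query: return the LFNs matching the selection."""
    rows = conn.execute(
        "SELECT lfn FROM file_metadata "
        "WHERE exp = ? AND run BETWEEN ? AND ? AND stream = ?",
        (exp, run_min, run_max, stream))
    return [r[0] for r in rows]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    create_catalogue(conn)
    register_file(conn, "/belle2/mdst/exp7/run100-0.mdst", 7, 100, 0, 50000)
    print(find_files(conn, exp=7, run_min=1, run_max=200, stream=0))
```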

Belle II data cache system
We provide a simple data tool that is not based on a database.
Event-driven meta-data catalogue ⇒ condition-driven meta-data catalogue.
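As an illustration of what a "simple data tool which is not based on a database" could look like, here is a sketch of a local cache keyed directly by the logical file name. The cache directory, function names, and the stubbed grid transfer are assumptions for illustration, not the actual Belle II tool.

```python
import os

# Sketch of a database-free data cache: the cache is just a directory keyed
# by the logical file name (LFN).  The grid transfer is stubbed out with a
# local placeholder; a real tool would invoke the experiment's grid copy.

CACHE_DIR = "/tmp/belle2_cache"    # assumed local cache location

def _fetch_from_grid(lfn, dest):
    """Stand-in for a grid download of `lfn` into `dest`."""
    with open(dest, "w") as f:     # a real tool would copy from a storage element
        f.write(f"placeholder for {lfn}\n")

def get_file(lfn):
    """Return a local path for the requested LFN, filling the cache on a miss."""
    local_name = lfn.lstrip("/").replace("/", "_")
    local_path = os.path.join(CACHE_DIR, local_name)
    if not os.path.exists(local_path):          # cache miss
        os.makedirs(CACHE_DIR, exist_ok=True)
        _fetch_from_grid(lfn, local_path)       # fill the cache
    return local_path                           # hit (or freshly filled)

if __name__ == "__main__":
    print(get_file("/belle2/mdst/exp7/run100-0.mdst"))
```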

Large-scale DH test with Belle data
We search for the files of interest using a single table of the meta-data system while varying the number of parallel processes. The search scales linearly and remains stable up to 50 simultaneous parallel processes.
Input: 2013 files, 12 M events, 5792 pb^-1 of integrated luminosity.
Query keys: run #, exp #, stream #, ...
Output: the matching files.
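A scaling test of this kind could be structured as below: N worker processes each fire a batch of catalogue queries, and the total throughput is checked for roughly linear growth with N. This is a hedged sketch, not the actual test harness; query_metadata() is a stand-in for the real AMGA query, and the latency and batch sizes are assumed values.

```python
import time
from multiprocessing import Pool

# Sketch of a parallel-query scaling test.  query_metadata() stands in for a
# real catalogue query (here it just sleeps for a nominal latency).

QUERIES_PER_WORKER = 100
QUERY_LATENCY_S = 0.01   # assumed single-query latency

def query_metadata(_):
    for _ in range(QUERIES_PER_WORKER):
        time.sleep(QUERY_LATENCY_S)   # replace with a real catalogue query
    return QUERIES_PER_WORKER

def throughput(n_workers):
    start = time.time()
    with Pool(n_workers) as pool:
        total = sum(pool.map(query_metadata, range(n_workers)))
    return total / (time.time() - start)   # queries per second

if __name__ == "__main__":
    for n in (1, 10, 25, 50):
        print(f"{n:3d} workers: {throughput(n):8.1f} queries/s")
```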

Large-scale DH test with Belle II data (randomly generated)
Input: 70,000 files (140 TB).
With a single table and multi-processing, the generating rate is 400 files/s; with 30 tables and multi-processing, it is also 400 files/s.
The search again scales linearly and remains stable up to 50 simultaneous parallel processes, and the performance with a single table is almost the same as with 30 tables.
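The single-table vs. multi-table comparison can be pictured as follows: randomly generated file entries are inserted either into one table or spread over 30 tables chosen by a simple shard key. The shard key (run number modulo the table count) and the use of sqlite3 are illustrative assumptions; the 400 files/s figure above refers to the actual AMGA setup, not to this toy.

```python
import sqlite3
import time
import random

# Sketch of single-table vs. 30-table insertion of randomly generated
# file metadata.  Sharding scheme and backend are assumptions.

N_TABLES = 30
N_FILES = 10000

def make_tables(conn, n):
    for i in range(n):
        conn.execute(f"CREATE TABLE files_{i} (lfn TEXT, exp INT, run INT)")

def insert_random(conn, n_tables):
    start = time.time()
    for i in range(N_FILES):
        exp, run = random.randint(1, 20), random.randint(1, 5000)
        table = f"files_{run % n_tables}"        # shard choice by run number
        conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?)",
                     (f"/belle2/random/file_{i}.mdst", exp, run))
    conn.commit()
    return N_FILES / (time.time() - start)       # files per second

if __name__ == "__main__":
    for n_tables in (1, N_TABLES):
        conn = sqlite3.connect(":memory:")
        make_tables(conn, n_tables)
        print(f"{n_tables:2d} table(s): {insert_random(conn, n_tables):8.0f} files/s")
```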

The interface between HLT and storage ⇒ apply AMGA
[Diagram: the Belle II computing model with the detector, DAQ, and HLT feeding raw data storage and processing at KEK; event rates of 30 kHz into the HLT and 6 kHz out of it; LFC, AMGA, and DIRAC provide the catalogue and workload services for the grid sites, cloud resources, and local analysis.]
We assume two files/s for both reading and writing in AMGA.
Read-write optimization for meta-data: generating for writing alone reaches 400 files/s; reading performance is to be tested at 1 Hz, 2 Hz, 10 Hz, 50 Hz, and 100 Hz.
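A quick back-of-the-envelope check relates the HLT output rate to the file registration rate: it depends only on how many events go into each output file. The events-per-file values below are assumptions for illustration, not Belle II parameters; with about 3000 events per file the required rate is 2 files/s, consistent with the assumption above.

```python
# Relating the HLT output event rate to the metadata registration rate.
# The events-per-file values are illustrative assumptions.

HLT_OUTPUT_RATE_HZ = 6000          # events/s after the HLT (from the slide)

for events_per_file in (1000, 3000, 10000):
    files_per_s = HLT_OUTPUT_RATE_HZ / events_per_file
    print(f"{events_per_file:6d} events/file -> {files_per_s:5.1f} files/s to register")
```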

Plan
- DIRAC development environment: ~1 month
- Data registration with AMGA: ~3 months
- AMGA integration: ~3 months
- Data tools: ~6 months
- DAQ integration: ~6 months

Summary
At the Belle II experiment, in order to handle 50 times more data than Belle, we have constructed the Belle II data handling system based on grids.
We have tested large-scale data handling with Belle data and with Belle II data (randomly generated).
We are applying AMGA at the HLT, and we are integrating AMGA with DIRAC.

Thank you.