Belle II Physics Analysis Center at TIFR

Belle II Physics Analysis Center at TIFR
Prashant D. Shingade (on behalf of the TIFR Belle team)
Annual DHEP Meeting, April 7-8, 2016

Belle II Computing Grid

The Belle II experiment will operate at the SuperKEKB accelerator in Tsukuba, Japan. The computing resources needed to process and analyze the data recorded by the experiment will come from remote computing centers, in addition to the one at KEK, with the help of Grid computing technology; the resulting system is termed the "Belle II Computing Grid".

The Belle II Computing Grid is classified into Tiers according to functionality:
- Tier0, located at KEK (Raw Data Center), is responsible for holding the RAW data from the experiment.
- Tier1 sites are regional data centers responsible for regional support of Grid operations and for storing copies of the data.
- Tier2 sites are Monte Carlo (MC) data production sites, where end-user analysis is also done.

Belle II Physics Analysis Center at TIFR

Our primary aim is to contribute a Tier2 center within India, termed the Belle II Physics Analysis Center, which entails:
1) Provision of managed disk storage for files and databases
2) Provision of access to the stored data by other centers via the Belle II Computing Grid
3) Provision of computing cores for end-user analysis
4) Ensuring network bandwidth and services for data exchange with other computing centers

Minimal computing resources required for a Tier2 center to start with:

Resource           Minimum level
CPU (HEP-SPEC06)   500
Disk (TB)          50
Network (Gbps)     1
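As a rough, back-of-the-envelope illustration (not taken from the slides), the Python sketch below converts the 500 HEP-SPEC06 minimum into an approximate core count, assuming about 10 HS06 per physical core, a typical value for Xeon cores of that period:

    # Sizing sketch: convert the minimum HEP-SPEC06 requirement into a core count.
    # Assumption: ~10 HS06 per physical core (illustrative value, not from the slides).
    hs06_required = 500
    hs06_per_core = 10.0
    cores_needed = hs06_required / hs06_per_core
    print(f"Roughly {cores_needed:.0f} cores are needed to meet the 500 HS06 minimum")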

Towards a Tier2 Center

The Belle II Physics Analysis Center at TIFR is shaping up to be a Tier2 center. As per our plan, we started by installing a local batch system plus SSH (or a DIRAC slave) for job submission, contributing to the Grid system DIRAC (Distributed Infrastructure with Remote Agent Control).
- We have one server with dual Intel Xeon E5-2640 CPUs, contributing 12 cores through the DIRAC client.
- Four more servers are in the process of being added to the same DIRAC system.
- In total, we have 72 cores to contribute and 19 TB of basic storage to offer.
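For illustration, a job can be submitted to DIRAC from the client through its Python API roughly as in the minimal sketch below; the job name, executable, and commented-out site name are hypothetical placeholders, not the actual TIFR configuration:

    # Minimal DIRAC job-submission sketch using the DIRAC Python API.
    # Job name, executable and site name are hypothetical placeholders.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # initialize the DIRAC client environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("belle2_test_job")        # placeholder job name
    job.setExecutable("run_analysis.sh")  # placeholder script shipped in the input sandbox
    job.setCPUTime(3600)                  # requested CPU time in seconds
    # job.setDestination("DIRAC.TIFR.in") # hypothetical site name; omit to let DIRAC choose

    dirac = Dirac()
    result = dirac.submitJob(job)  # older DIRAC releases name this method submit()
    print(result)                  # the returned dictionary contains the job ID on success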

Towards a Tier2 Center

We will migrate to a Grid site with a Computing Element (CE), a Storage Element (SE), and monitoring servers once we have gained experience operating the distributed computing system and have purchased a few more machines (a minimum of 100 cores is required to contribute).
- Job execution finished successfully and the output was properly uploaded to a remote SE.
- So far, about 40k jobs have been executed successfully.
- All settings on the server have been checked and completed; the machine runs torque-4.2.8, squid-3.4.9, and CernVM-FS 2.1.19.
- We can mount the filesystem from "belle.cern.ch" under the "/cvmfs" directory using CVMFS.
- A DIRAC user submits Grid jobs to the server.
- All outbound ports are open on the firewall.
- The setup has been up and running since November 2014.
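The CVMFS mount of belle.cern.ch mentioned above is typically driven by a small client configuration pointing at a local squid proxy; the sketch below shows what such a configuration could look like, where the proxy hostname and cache quota are assumptions rather than values reported in the slides:

    # /etc/cvmfs/default.local -- minimal CVMFS client configuration sketch.
    # Proxy hostname and cache quota are assumptions, not values from the slides.
    CVMFS_REPOSITORIES=belle.cern.ch
    CVMFS_HTTP_PROXY="http://cvmfs-squid.example.org:3128"  # hypothetical local squid (squid-3.4.9 in our case)
    CVMFS_QUOTA_LIMIT=20000                                 # local cache size in MB

    # After editing the file, the setup can be checked and the repository mounted with:
    #   cvmfs_config chksetup
    #   cvmfs_config probe belle.cern.ch
    #   ls /cvmfs/belle.cern.ch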

Looking Ahead

Tasks in the queue for the Belle II Physics Analysis Center at TIFR:
- Shift the servers to a more secure, structured, and controlled environment.
- Use the National Knowledge Network (NKN) link for better connectivity.
- Purchase the additional servers required to build up the Tier2 facility.
- Install and register the site on the Grid.