1 Tier-2 Data Center at Kyungpook National University for the LHC-CMS Experiment
HyangKyu Park, Center for High Energy Physics, Kyungpook National University, Daegu, Korea
LHC Physics Workshop, KonKuk U., Aug. 10~12, 2010

2 ROOT Tutorial (Tier-2 Summer School)
• Date/time: August 12, 14:00~18:00
• Location: PC room
Those wishing to participate, please gather in front of the registration desk.

3 • “The CMS detector is essentially a 100-megapixel digital camera that will take 40 M pictures/s of particle interactions.” (Dan Green)
• The High Level Trigger farm writes RAW events of 1.5 MB at a rate of 150 Hz: 1.5 MB x 150/s x 10⁷ s ≈ 2.3 PB/yr.
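As a quick cross-check of the ≈2.3 PB/yr figure, here is a minimal Python sketch using only the numbers quoted on this slide; the 10⁷ s of effective data-taking per year is the usual LHC convention, not a KNU-specific number.

# Back-of-the-envelope check of the raw-data volume quoted on the slide.
EVENT_SIZE_MB = 1.5       # RAW event size written by the High Level Trigger
TRIGGER_RATE_HZ = 150     # HLT output rate
LIVE_TIME_S = 1e7         # effective data-taking seconds per year (conventional figure)

volume_pb = EVENT_SIZE_MB * TRIGGER_RATE_HZ * LIVE_TIME_S / 1e9   # MB -> PB (decimal)
print(f"Estimated raw-data volume: {volume_pb:.2f} PB/yr")        # ~2.25 PB/yr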

4 LEP & LHC in Numbers
                           LEP (1989/2000)    CMS (2009)           Factor
Nr. electronic channels    ≈ 10⁵              ≈ 10⁷                x 10²
Raw data rate              ≈ 100 GB/s         ≈ 1000 TB/s          x 10⁴
Data rate on tape          ≈ 1 MB/s           ≈ 100 MB/s           x 10²
Event size                 ≈ 100 kB           ≈ 1 MB               x 10
Bunch separation           22 μs              25 ns                x 10³
Bunch crossing rate        45 kHz             40 MHz               x 10³
Rate on tape               10 Hz              100 Hz               x 10
Analysis rate              0.1 Hz (Z⁰, W)     ≈ 10⁻⁶ Hz (Higgs)    x 10⁵

5 The LHC Data Grid Hierarchy
[Diagram of the tiered data grid; KNU labelled among the sites.]
Outside/CERN ratio larger; expanded role of Tier-1s & Tier-2s: greater reliance on networks.
~2000 physicists, 40 countries
~10s of petabytes/yr; ~1000 petabytes in < 10 yrs?

6 Service and Data Hierarchy
• Tier-0 at CERN
– Data acquisition & reconstruction of raw data
– Data archiving (tape & disk storage)
– Distribution of raw & reconstructed data -> Tier-1 centers
• Tier-1
– Regional & global services: ASCC (Taiwan), CCIN2P3 (Lyon), FNAL (Chicago), GridKa (Karlsruhe), INFN-CNAF (Bologna), PIC (Barcelona), RAL (Oxford)
– Data archiving (tape & disk storage)
– Reconstruction
– Data-heavy analysis
• Tier-2
– ~40 sites (including Kyungpook National Univ.)
– MC production
– End-user analysis (local community use)

7 KNU is registered as a Tier-2 site in CMS.

8

9 Current Tier-2 Computing Resources
[Chart of CPU resources.]

10 CMS Computing Resources in KNU
CPU (kSI2k)         470 (~350 CPUs)
Disk storage (TB)   190 (14 disk servers)
Tape (TB)           46
WAN (Gbps)          20 (KREONET + KOREN)
Grid system         LCG
Support             High Energy CMS Computing
Role                Tier-2

11 [Network diagram of Korea's international research links: KREONET/GLORIAD (KR-CN), KOREN/APII (KR-JP), APII/TEIN3 and GLORIAD (KR-JP), TEIN2 North/ORIENT and TEIN3 South across Asia (PH, VN, TH, ID, MY, AU, HK, SG, JP, CN), with connections to North America (via TransPAC2 and GLORIAD) and the EU.]
Courtesy of Prof. D. Son and Dr. B.K. Kim

12 CMS Computing Activities in KNU
• Running Tier-2
• Participating in LCG Service Challenges and CSAs every year as a Tier-2
– SC04 (Service Challenge): Jun.~Sep. 2006
– CSA06 (Computing, Software & Analysis): Sep.~Nov. 2006
– Load Test 07: Feb.~Jun. 2007
– CSA07: Sep.~Oct. 2007
– Pre-CSA08: Feb. 2008
– CSA08: May~Jun. 2008
– STEP09: Jun. 2009
• Testing, demonstrating, bandwidth challenging
– SC05, SC06, SC07, SC08, SC09
• Supporting physics analyses
– RS graviton search
– Higgs search
– W' search

13 Preparing Physics Analyses using KNU_T2
• Study of the Randall-Sundrum graviton in the mode G* → ZZ → μ⁺μ⁻μ⁺μ⁻
– Generated 800 k events in total: 80 MC sets (16 points x 5 parameters)
• Study of the Drell-Yan process
– MS student thesis topic
– Generating 200 k MC events took only one night
[Plot for M_G* = 500 GeV]
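For a rough sense of what the overnight 200 k-event Drell-Yan production implies in batch resources, here is an illustrative Python sketch; the per-event CPU times are assumptions made for the example, not numbers from the talk.

# Illustrative slot-hour estimate for producing 200k MC events.
# Per-event CPU times below are assumed values, not taken from the slide.
N_EVENTS = 200_000

for sec_per_event in (1, 5, 10, 30):           # assumed CPU time per generated event
    slot_hours = N_EVENTS * sec_per_event / 3600
    print(f"{sec_per_event:>3} s/event -> {slot_hours:6.0f} batch-slot hours")
# e.g. at 10 s/event this is ~556 slot-hours, i.e. roughly 50 slots running
# for a 12-hour night.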

14 One-Year Performance of the KNU Tier-2 (I)
[Plots comparing sites; labels include KNU, MIT, DESY and UK-RAL.]

15 One-Year Performance of the KNU Tier-2 (II)
[Plots of KNU -> T1 transfers; labels include KNU, MIT and DESY.]

16 Recent Performance of the KNU Tier-2 (III)
[Plots of T1 -> KNU transfers; labels include KNU, MIT and DESY.]

17 Recent Performance of the KNU Tier-2 (IV)
~90% efficiency

18 Recent Performance of the KNU Tier-2 (V)

19 Establishing a CMS Center (I)
Current worldwide CMS Centers [map; labels include MIT and DESY]

20 Establishing a CMS Center (II)
• Communication focal point for students, postdocs & faculty
• CMS operations:
– Sub-detector data quality monitoring
– Data analysis
– CMS computing operation
– Remote shifts
• Outreach activities:
– Increase CMS visibility
– Attract new students
– Tours and discussions with physicists
– Live displays, posters and other exhibits

21 Media Event at the CMS Center for the First 7 TeV Collisions on March 30

22 Future Plan
• Manpower: 3 FTE in total
• Computing resources:
– CPU (kSI2k): 710 (currently 470)
– Disk (TB): 205 (currently 190)
– Network (Gbps): 20
• Dedicated to the Exotica and QCD groups, with a strong desire to also work with the Higgs and SUSY groups
• Local user support for physics analyses
– We are accepting proposals to use our Tier-2 resources

23 Summary
• We are essentially ready for CMS physics analysis.
• Your support is vital for the success of Tier-2 operation.
• The LHC experiment has started and will soon produce ~10 PB/yr. It is time to think seriously about an LHC data center in Korea, which would be another big step for the KR-HEP program.