Tier-2 Data Center at Kyungpook National University for the LHC-CMS Experiment
HyangKyu Park, Center for High Energy Physics, Kyungpook National University, Daegu, Korea
LHC Physics Workshop, KonKuk U., Aug. 10–12, 2010
ROOT Tutorial (Tier-2 Summer School)
Date & time: Aug. 12, 14:00–18:00
Location: PC lab
Those who wish to participate, please gather in front of the registration desk.
“The CMS detector is essentially a 100-megapixel digital camera that will take 40 M pictures/s of particle interactions.” – Dan Green
The High Level Trigger farm writes RAW events of 1.5 MB at a rate of 150 Hz:
1.5 MB × 150/s × 10^7 s ≈ 2.3 petabytes/yr
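The back-of-the-envelope rate above can be checked with a few lines of Python (a sketch; the 10^7 s figure is the conventional approximation of one LHC running year):

```python
# Annual RAW data volume from the CMS High Level Trigger output.
event_size_mb = 1.5        # RAW event size in MB
rate_hz = 150              # HLT output rate in events/s
seconds_per_year = 1e7     # canonical "accelerator year" (~1/3 of a calendar year)

volume_mb = event_size_mb * rate_hz * seconds_per_year
volume_pb = volume_mb / 1e9   # 1 PB = 10^9 MB (decimal units)
print(f"{volume_pb:.2f} PB/yr")  # → 2.25 PB/yr, i.e. ≈ 2.3 PB/yr
```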
LEP & LHC in Numbers

                             LEP (1989/2000)   CMS (2009)        Factor
  No. of electronic channels 100 000           10 000 000        x 10^2
  Raw data rate              100 GB/s          1 000 TB/s        x 10^4
  Data rate on tape          1 MB/s            100 MB/s          x 10^2
  Event size                 100 KB            1 MB              x 10
  Bunch separation           22 µs             25 ns             x 10^3
  Bunch crossing rate        45 kHz            40 MHz            x 10^3
  Rate on tape               10 Hz             100 Hz            x 10
  Analysis                   0.1 Hz (Z0, W)    10^-6 Hz (Higgs)  x 10^5
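The "Factor" column is simply the order of magnitude of each LEP → CMS ratio; a quick sketch verifying a few rows (values transcribed from the table, decimal units assumed):

```python
import math

# (quantity, LEP value, CMS value) in consistent base units
rows = [
    ("electronic channels", 1e5, 1e7),
    ("event size (bytes)", 1e5, 1e6),          # 100 KB vs 1 MB
    ("bunch crossing rate (Hz)", 45e3, 40e6),  # 45 kHz vs 40 MHz
    ("rate on tape (Hz)", 10, 100),
]

# Order of magnitude of each LEP -> CMS increase
factors = {name: round(math.log10(cms / lep)) for name, lep, cms in rows}
for name, f in factors.items():
    print(f"{name}: x 10^{f}")
```

Note that the bunch-crossing ratio is really ~889, which the table rounds to the nearest power of ten.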
The LHC Data Grid Hierarchy
Outside/CERN ratio larger; expanded role of Tier-1s & Tier-2s; greater reliance on networks.
~2000 physicists, 40 countries; ~10s of petabytes/yr; ~1000 petabytes in < 10 yrs?
[Diagram: the tiered grid hierarchy, with KNU among the Tier-2 centers]
Service and Data Hierarchy
Tier-0 at CERN
– Data acquisition & reconstruction of raw data
– Data archiving (tape & disk storage)
– Distribution of raw & reconstructed data to Tier-1 centers
Tier-1
– Regional & global services: ASCC (Taiwan), CCIN2P3 (Lyon), FNAL (Chicago), GridKA (Karlsruhe), INFN-CNAF (Bologna), PIC (Barcelona), RAL (Oxford)
– Data archiving (tape & disk storage)
– Reconstruction
– Data-heavy analysis
Tier-2
– ~40 sites (including Kyungpook National Univ.)
– MC production
– End-user analysis (local community use)
https://vocms08.cern.ch/sitedb/sitelist/
KNU is registered as a Tier-2 site in CMS.
http://t2-cms.knu.ac.kr
Current Tier-2 Computing Resources
[Chart: CPU capacity]
CMS Computing Resources in KNU

  CPU (kSI2k)        470 (~350 CPUs)
  Disk storage (TB)  190 (14 disk servers)
  Tape (TB)          46
  WAN (Gbps)         20 (KREONET + KOREN)
  Grid system        LCG
  Support            High-energy CMS computing
  Role               Tier-2
[Network diagram: TEIN2 North/ORIENT, TEIN3 South, KREONET/GLORIAD, and KOREN/APII links (155 Mbps–10 Gbps) connecting KR with CN, JP, HK, SG, and onward to North America (via TransPAC2 and GLORIAD) and the EU]
Courtesy of Prof. D. Son and Dr. B.K. Kim
CMS Computing Activities in KNU
Running Tier-2
Participating in LCG service challenges and CSAs every year as a Tier-2:
– SC04 (Service Challenge): Jun.–Sep. 2006
– CSA06 (Computing, Software & Analysis): Sep.–Nov. 2006
– Load Test 07: Feb.–Jun. 2007
– CSA07: Sep.–Oct. 2007
– Pre-CSA08: Feb. 2008
– CSA08: May–Jun. 2008
– STEP09: Jun. 2009
Testing, demonstrating, and bandwidth challenges:
– SC05, SC06, SC07, SC08, SC09
Supporting physics analyses:
– RS graviton search
– Higgs search
– W′ search
Preparing Physics Analyses using KNU_T2
Study of the Randall-Sundrum graviton in the mode G* → ZZ → μ+μ−μ+μ−
– Generated 800 k events in total: 80 MC sets (16 points × 5 parameters)
Study of the Drell-Yan process
– MS student thesis topic
– Generating 200 k MC events took only one night
[Plot: M_G* = 500 GeV sample]
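The overnight Drell-Yan production implies a healthy aggregate throughput; a rough estimate (note: "one night" is assumed here to be ~12 hours, which is not stated on the slide):

```python
# Rough aggregate MC generation throughput for the Drell-Yan sample.
# The 12-hour duration is an assumption; the slide only says "one night".
events = 200_000
hours = 12                        # assumed wall-clock time
rate_per_s = events / (hours * 3600)
print(f"~{rate_per_s:.1f} events/s")  # ≈ 4.6 events/s across the farm
```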
One-Year Performance of the KNU Tier-2 (I)
[Plots: transfer and job metrics for KNU, with other Tier-2 sites (MIT, DESY, UK-RAL) for comparison]
One-Year Performance of the KNU Tier-2 (II)
[Plot: KNU → T1 transfers, with MIT and DESY for comparison]
Recent Performance of the KNU Tier-2 (III)
[Plot: T1 → KNU transfers, with MIT and DESY for comparison]
Recent Performance of the KNU Tier-2 (IV)
~90% efficiency
Recent Performance of the KNU Tier-2 (V)
Establishing a CMS Center (I)
Current worldwide CMS Centers
Establishing a CMS Center (II)
Communication focal point for students, postdocs & faculty.
CMS operations:
– Sub-detector data-quality monitoring
– Data analysis
– CMS computing operation
– Remote shifts
Outreach activities:
– Increase CMS visibility
– Attract new students
– Tours and discussions with physicists
– Live displays, posters, and other exhibits
Media Event for the First 7 TeV Collisions on March 30 at the CMS Center
Future Plan
Manpower: 3 FTE in total
Computing resources dedicated to the Exotica and QCD groups, but with a strong desire to also work with the Higgs and SUSY groups
Local user support for physics analyses
– We are accepting proposals to use our Tier-2 resources

  Year            2010        2011   2012   2013
  CPU (kSI2k)     710 (470)   800    900    1000
  Disk (TB)       205 (190)   230    250
  Network (Gbps)  20
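The planned CPU ramp-up works out to a steady ~11–13% per year; a small sketch (numbers transcribed from the plan above, using the planned 2010 figure rather than the current 470 kSI2k):

```python
# Planned KNU Tier-2 CPU capacity by year (values from the plan above).
cpu_ksi2k = {2010: 710, 2011: 800, 2012: 900, 2013: 1000}

# Year-over-year growth in percent
growth_pct = {
    year: 100 * (cpu_ksi2k[year] - cpu_ksi2k[year - 1]) / cpu_ksi2k[year - 1]
    for year in range(2011, 2014)
}
for year, g in growth_pct.items():
    print(f"{year}: {cpu_ksi2k[year]} kSI2k (+{g:.0f}%)")
```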
Summary
We are essentially ready for CMS physics analysis.
Your support is vital for the success of the Tier-2 operation.
The LHC experiment has started and will soon produce ~10 PB/yr.
It is time to think seriously about an LHC data center in Korea, which would be another big step for the KR-HEP program.