High Energy Physics (HEP) Computing. HyangKyu Park, Kyungpook National University, Daegu, Korea. 2008 Supercomputing & KREONET Workshop, Ramada Hotel, JeJu, Oct. 16~18, 2008

Presentation transcript:

1 High Energy Physics (HEP) Computing. HyangKyu Park, Kyungpook National University, Daegu, Korea. 2008 Supercomputing & KREONET Workshop, Ramada Hotel, JeJu, Oct. 16~18, 2008

High Energy Physics (HEP) High Energy Physics (HEP) is the study of the basic elements of matter and the forces acting among them. People have long asked, "What is the world made of?" and "What holds it together?"

Major HEP Laboratories in the World: FNAL (US), BNL (US), SLAC (US), CERN (Europe), DESY (Germany), KEK (Japan)

Major HEP Experiments (No. of Collaborators / No. of Countries / Data Volume, comments):
– Belle (KEK, Japan): ~ Peta-Byte (ends ~2010)
– CDF (FNAL, USA): ~ Peta-Byte (ends in 2010)
– D0 (FNAL, USA): ~ Peta-Byte (ends in 2010)
– CMS (CERN, Europe): ~2000 collaborators, 36 countries, ~10 Peta-Byte/yr (starts in 2008)
HEP collaborations are increasingly international.

CMS Computing

The Large Hadron Collider at CERN (where the web was born), hosting the ATLAS, CMS, ALICE, LHCb (B-physics) and TOTEM experiments. Physicists from 250+ institutes in 60+ countries. Challenges: analyze petabytes of complex data cooperatively; harness global computing, data & network resources.

LHC started just now!

9
• "The CMS detector is essentially a 100-megapixel digital camera that will take 40 M pictures/s of particle interactions." (Dan Green)
• The High Level Trigger farm writes RAW events of 1.5 MB at a rate of 150 Hz: 1.5 MB x 150/s x 10^7 s ≈ 2.3 Peta-Byte/yr
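To make the arithmetic explicit, here is a minimal Python sketch that reproduces the slide's estimate from the quoted event size, trigger rate, and the assumed ~10^7 live seconds of data taking per year (decimal units, 1 PB = 10^9 MB):

```python
# Back-of-the-envelope check of the RAW data volume quoted on the slide.
event_size_mb = 1.5          # RAW event size written by the High Level Trigger (MB)
trigger_rate_hz = 150        # events written to storage per second
live_seconds_per_year = 1e7  # ~10^7 s of data taking per year (slide's assumption)

raw_mb_per_year = event_size_mb * trigger_rate_hz * live_seconds_per_year
raw_pb_per_year = raw_mb_per_year / 1e9   # 1 PB = 10^9 MB (decimal units)

print(f"RAW data volume: {raw_pb_per_year:.2f} PB/yr")  # ~2.25 PB/yr, i.e. ~2.3 PB/yr
```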

10 LEP & LHC in Numbers (LEP, 1989/2000 vs. CMS, 2008; approximate growth factor)
– Nr. of electronic channels: x 10^2
– Raw data rate: ≈100 GB/s vs. ≈ TB/s: x 10^4
– Data rate on tape: ≈1 MB/s vs. ≈100 MB/s: x 10^2
– Event size: ≈100 KB vs. ≈1 MB: x 10
– Bunch separation: 22 μs vs. 25 ns: x 10^3
– Bunch crossing rate: 45 kHz vs. 40 MHz: x 10^3
– Rate on tape: 10 Hz vs. 100 Hz: x 10
– Analysis rate: 0.1 Hz (Z0, W) vs. Hz (Higgs): x 10^5
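As a quick cross-check of the rounded factors, a small Python sketch using only the values quoted in the rows above (rows whose values are missing in the transcript are omitted):

```python
# Check the order-of-magnitude LEP -> CMS factors from the values on the slide.
import math

rows = {
    # name: (LEP value, CMS value) in consistent units
    "data rate on tape (MB/s)": (1, 100),
    "event size (KB)":          (100, 1000),      # 1 MB = 1000 KB
    "bunch separation (ns)":    (22_000, 25),     # 22 microseconds vs. 25 ns
    "bunch crossing rate (Hz)": (45_000, 40_000_000),
    "rate on tape (Hz)":        (10, 100),
}

for name, (lep, cms) in rows.items():
    factor = max(lep, cms) / min(lep, cms)
    print(f"{name}: factor ~10^{round(math.log10(factor))}")
```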

11 The LHC Data Grid Hierarchy [diagram of Tier-0/Tier-1/Tier-2 centres, including KNU]: ~2000 physicists, 40 countries; ~10s of Petabytes/yr by 2010; ~1000 Petabytes in < 10 yrs?

12 Service and Data Hierarchy
• Tier-0 at CERN
– Data acquisition & reconstruction of raw data
– Data archiving (tape & disk storage)
– Distribution of raw & reconstructed data to Tier-1 centers
• Tier-1
– Regional & global services: ASCC (Taiwan), CCIN2P3 (Lyon), FNAL (Chicago), GridKA (Karlsruhe), INFN-CNAF (Bologna), PIC (Barcelona), RAL (Oxford)
– Data archiving (tape & disk storage)
– Reconstruction
– Heavy data analysis
• Tier-2
– ~40 sites (including Kyungpook National Univ.)
– MC production
– End-user analysis (local community use)
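The division of labour can be summarised as a simple data structure; the sketch below is purely illustrative (not part of any CMS software) and only restates the roles listed on this slide:

```python
# Illustrative summary of the tiered roles in the LHC/CMS computing model.
TIER_ROLES = {
    "Tier-0 (CERN)": [
        "data acquisition & first-pass reconstruction of raw data",
        "tape & disk archiving",
        "distribution of raw & reconstructed data to Tier-1 centres",
    ],
    "Tier-1 (ASCC, CCIN2P3, FNAL, GridKA, INFN-CNAF, PIC, RAL)": [
        "regional & global services",
        "tape & disk archiving",
        "reconstruction",
        "data-heavy analysis",
    ],
    "Tier-2 (~40 sites, incl. Kyungpook National Univ.)": [
        "Monte Carlo production",
        "end-user analysis for the local community",
    ],
}

for tier, roles in TIER_ROLES.items():
    print(tier)
    for role in roles:
        print("  -", role)
```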

13 LHC Computing Grid (LCG) Farms [map; KNU appears as LCG_KNU]

14 Current Tier-1 Computing Resources. Requirements by 2008:
– CPU: 2500 kSI2k
– Disk: 1.2 PB
– Tape: 2.8 PB
– WAN: at least 10 Gbps

15 Current Tier-2 Computing Resources. Requirements by 2008:
– CPU: 900 kSI2k
– Disk: 200 TB
– WAN: at least 1 Gbps; 10 Gbps is recommended

16 CMS Computing in KNU
– CPU (kSI2k): 400
– Disk storage (TB): 117 -> 150 (12 disk servers)
– Tape (TB): 46
– WAN (Gbps): 12 -> 20
– Grid system: LCG
– Support: High Energy CMS Computing
– Role: Tier-2
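For a rough comparison of these KNU figures with the nominal 2008 Tier-2 requirements on the previous slide, a minimal Python sketch (CPU and disk only; all numbers taken directly from the two slides):

```python
# Compare KNU's 2008 Tier-2 resources with the nominal Tier-2 requirements by 2008.
requirements = {"cpu_ksi2k": 900, "disk_tb": 200}   # from the Tier-2 requirements slide
knu = {"cpu_ksi2k": 400, "disk_tb": 150}            # disk after the 117 -> 150 TB upgrade

for key, required in requirements.items():
    have = knu[key]
    print(f"{key}: {have} of {required} required ({100 * have / required:.0f}%)")
```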

17 International research network links for Korea [map]: KREONET/GLORIAD (KR-CN), KOREN/APII (KR-JP), APII/TEIN2 and GLORIAD, TEIN2 North/ORIENT, TEIN2 South, and links to North America (via TransPAC2 and GLORIAD) and the EU. Courtesy of Prof. D. Son and Dr. B.K. Kim.

18 CMS Computing Activities in KNU
• Running Tier-2
• Participating in LCG Service Challenges and CSAs every year as a Tier-2
– SC04 (Service Challenge): Jun.~Sep. 2006
– CSA06 (Computing, Software & Analysis): Sep.~Nov. 2006
– Load Test 07: Feb.~Jun. 2007
– CSA07: Sep.~Oct. 2007
– Pre-CSA08: Feb. 2008
– CSA08: May~June 2008
• Testing, demonstrating, and bandwidth challenges at SC05, SC06, SC07
• Preparing physics analyses
– RS Graviton search
– Drell-Yan process study
• Configured a Tier-3 and supporting Tier-3s (Konkuk U.)

19 CSA07 (Computing, Software & Analysis)
• A "50% of 2008" data challenge of CMS data handling
– Schedule: July-Aug. (preparation), Sep. (CSA07 start)

CSA08 (Computing, Software & Analysis)

21 Summary of CSA07

Transferred Data Volume from Tier-1 to KNU during CSA08

Job Submission Activity during CSA08 [plot comparing MIT, DESY, KNU]
Activity: # of submitted jobs / successes / success rate (%)
– Analysis: 7,969 / 1,… / …
– CCRCPG: 1,235 / 4,… / …
– Total: 9,204 / 6,… / …

Transferred Data Volume from Tier-1 to KNU

Job Submission Activity from Apr. to Oct. [plot comparing MIT, DESY, KNU; system upgrade and downtime periods marked]
Activity: # of submitted jobs / successes / success rate (%)
– CCRCPG: 1,235 / 1,… / …
– JobRobot: 65,140 / 49,… / …
– Analysis: 13,320 / 6,… / …
– Production
– Total: 80,434 / 58,… / …

26 Configuring the Tier-3 with Konkuk University

27 Elements of the Data Grid System
• Data Grid service (or support) nodes (8 nodes):
– glite-UI (User Interface)
– glite-BDII (Berkeley Database Information Index)
– glite-LFC_mysql (LCG File Catalog)
– glite-MON (Monitoring)
– glite-PX (Proxy server)
– glite-SE_dcache (Storage Element, dCache)
– glite-RB (Resource Broker, job management)
– glite-CE_torque (Computing Element, Torque)
• Worker nodes: data processing and computation
• Storage Element (file server): stores a large amount of data
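As an illustration of how such a site exposes its information, the glite-BDII node above publishes site and service records over LDAP (port 2170, GLUE schema). The sketch below is a minimal example, assuming the python-ldap package and a hypothetical BDII hostname; it lists the Computing Elements the index publishes, the same information that command-line tools such as lcg-infosites report.

```python
# Minimal sketch: query a gLite BDII (LDAP on port 2170, GLUE 1.x schema)
# for the Computing Elements it publishes. The hostname is hypothetical;
# point it at a site or top-level BDII.
import ldap  # pip install python-ldap

BDII_URI = "ldap://bdii.example.org:2170"   # hypothetical host
BASE_DN = "o=grid"                          # root of the GLUE information tree

conn = ldap.initialize(BDII_URI)
conn.simple_bind_s()                        # anonymous bind is sufficient for reads

entries = conn.search_s(
    BASE_DN,
    ldap.SCOPE_SUBTREE,
    "(objectClass=GlueCE)",                 # Computing Element records
    ["GlueCEUniqueID", "GlueCEStateStatus",
     "GlueCEStateRunningJobs", "GlueCEStateWaitingJobs"],
)

for dn, attrs in entries:
    ce_id = attrs.get("GlueCEUniqueID", [b"?"])[0].decode()
    status = attrs.get("GlueCEStateStatus", [b"?"])[0].decode()
    print(f"{ce_id}: {status}")
```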

28 Tier-3 Federation [map of the KOREN backbone (10G/40G/20G links) connecting Seoul, Suwon, Daejeon, Daegu, Gwangju and Busan]. CMS institutions: Kyungpook National Univ., Konkuk Univ., Sungkyunkwan Univ., Chonbuk National Univ. Other universities shown: Korea Univ., Univ. of Seoul, Chonnam National Univ., Dongshin Univ., Gyeongsang National Univ., Kangwon National Univ., Chungbuk National Univ., Seonam Univ. Resources: 40 CPUs & 10 TB.

Summary
• HEP has pushed against the limits of networking and computing technologies for decades.
• High-speed networks are vital for HEP research.
• The LHC experiments have just started and will soon produce ~10 PB/yr of data. We may expect 1 Tbps networks in less than a decade.
• HEP groups in the US, EU, Japan, China and Korea are collaborating on advanced network projects and Grid computing.