Tiers and GRID computing
12 August 2011, 김민석 (Sungkyunkwan University)

Presentation transcript:

Tiers and GRID computing, 김민석 (Sungkyunkwan University)

CERN
● Physics: particles, energy and matter, to understand more about how our universe works
● Education: find new ideas on bringing modern physics into the classroom
● Computing

Computing for Physics Research
● Data technologies: data storage and management; distributed, parallel and cloud computing; networking
● Data analysis: algorithms and tools for simulation, reconstruction and visualization
This distributed computing and data infrastructure is what we call the Grid.

Computing systems
A grid (a "super virtual computer")
● Computers connected to a network by conventional network interfaces, such as Ethernet
● Geographically distributed computing
● Parallel computing across sites
● Example: the Worldwide LHC Computing Grid (WLCG)
A supercomputer
● Many processors connected by a high-speed local computer bus, the subsystem that transfers data between components

Tier sites
● Tier-0: the CERN Computer Centre
● Tier-1: 11 sites
● Tier-2: about 140 sites at universities and other scientific institutes
● Tier-3: many local clusters in university departments, or individual PCs
Why tiered?

Tier functions
● Tier-0 (CERN Computer Centre)
  first accepts the RAW data
  repacks the RAW data into primary datasets
  archives the RAW data to tape
  distributes the RAW data to Tier-1 (i.e., two copies)
  prompt calibration to obtain calibration constants
  prompt first-pass reconstruction (RECO data)
  distributes the RECO data to Tier-1
● Tier-1
  re-reconstruction, skimming, calibration
  distributes data to other Tier-1 sites, CERN and Tier-2 sites
  stores data
● Tier-2
  grid-based analysis
  Monte Carlo (MC) simulation
In short: T0 = data processing ①, T1 = data re-processing and storage ②, T2 = data analysis ③
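To make the data flow above concrete, here is a minimal illustrative sketch in Python. It is not WLCG or CMS software; the classes, functions, site choices and sizes are hypothetical stand-ins for the real services.

from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    kind: str          # "RAW", "RECO", ...
    size_tb: float

@dataclass
class Site:
    name: str
    tier: int
    storage: list = field(default_factory=list)

def tier0_process(raw, t0, t1_sites):
    # Tier-0: archive RAW to tape, prompt-reconstruct, ship copies to Tier-1.
    t0.storage.append(raw)                                   # first copy of RAW at CERN
    reco = Dataset(raw.name + "_RECO", "RECO", raw.size_tb)  # prompt first-pass reconstruction
    for t1 in t1_sites:
        t1.storage.append(raw)                               # second custodial copy of RAW
        t1.storage.append(reco)                              # RECO for further processing
    return reco

def tier1_serve(reco, t2_sites):
    # Tier-1: re-reconstruction/skimming would happen here; Tier-2 hosts analysis data.
    for t2 in t2_sites:
        t2.storage.append(reco)

cern = Site("CERN", tier=0)
t1_sites = [Site("FNAL", 1), Site("KIT", 1)]
t2_sites = [Site("KNU", 2)]
reco = tier0_process(Dataset("Run2011A", "RAW", size_tb=100), cern, t1_sites)
tier1_serve(reco, t2_sites)
print([d.name for d in t2_sites[0].storage])                 # what a Tier-2 analysis job would see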

It's time to talk about... data processing, data storage, data volume and data analysis.

O/X quiz
● The colour of a "black box" flight recorder? Black
● The speed of a sneeze? Over 80 km/h
● The amount of LHC data? A lot

How much LHC data?
➀ "An awful lot" ➁ "A ridiculously huge amount" ➂ "Heaps of it, all over the place"
(three Korean regional-dialect ways of saying "a huge amount")

CERN in our universe
● Largest and biggest: the largest machine in the world
● Fastest: the fastest racetrack
● Hottest spots: proton-collision temperatures about 10^5 times that of the Sun's core
● Coldest and emptiest space: about -271 °C (1.9 K)
● Most powerful computer system: 10^5 dual-layer DVDs of data per year (petabytes per year)

Million, billion, trillion, quadrillion (10^6, 10^9, 10^12, 10^15): mega, giga, tera, peta.

LHC/CMS Tiered Data System
● ~3000 physicists, ~40 countries
● Tens of petabytes per year
● Exabytes in ~10 years
[Figure: tiered data flow, from PBytes/sec off the detector, through the trigger, down to MBytes/sec delivered for analysis]

Gigabytes per second
● A record for writing data to backup tape: 1.1 GB/s sustained for several hours (backup speed) = recording a movie onto DVD every 4 s
● A record for data transfer over 10,000 km between CERN and California: 2.38 GB/s for over an hour (transfer speed) = sending 200 DVD films in an hour
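What such a rate means in everyday terms is easy to check. A small sketch, assuming a 4.7 GB single-layer DVD movie (a number not stated on the slide):

rate = 1.1e9                  # backup-tape record, bytes per second (from the slide)
dvd_movie = 4.7e9             # assumed size of a single-layer DVD movie, bytes

seconds_per_dvd = dvd_movie / rate
per_hour_tb = rate * 3600 / 1e12
print(f"one DVD movie every {seconds_per_dvd:.1f} s, i.e. {per_hour_tb:.1f} TB per hour")
# one DVD movie every 4.3 s, i.e. 4.0 TB per hour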

Towers of DVDs
● 2 × 10^9 events/yr at 1.6 MB per event = 3.2 PB/yr from CMS
● One DVD holds 8.5 GB (single-sided, dual-layer) and is 1.2 mm thick
● How many DVDs? 3.2 PB ÷ 8.5 GB ≈ 400,000 DVDs
● How tall a stack? 400,000 DVDs × 1.2 mm ≈ 500 m
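The same estimate, written out in a few lines of Python using only the numbers quoted on the slide:

events_per_year = 2e9
event_size = 1.6e6            # bytes per event (1.6 MB)
dvd_capacity = 8.5e9          # bytes per dual-layer DVD
dvd_thickness = 1.2e-3        # metres

data_per_year = events_per_year * event_size            # 3.2e15 bytes = 3.2 PB
dvds = data_per_year / dvd_capacity                      # ~376,000 DVDs
height = dvds * dvd_thickness                            # ~450 m

print(f"{data_per_year/1e15:.1f} PB/yr -> {dvds:,.0f} DVDs -> a {height:.0f} m tall stack")
# 3.2 PB/yr -> 376,471 DVDs -> a 452 m tall stack (roughly 400,000 DVDs and ~500 m)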

Tier-2 sites in Korea
● CMS: Kyungpook National University (KNU)
● ALICE: Korea Institute of Science and Technology Information (KISTI)

GRID Real Time Monitoring [Map: KOREA and CERN on the live Grid monitor]

Tier services by middleware
● Storage Element (SE): data access protocols & interfaces
● Computing Element (CE): job manager and worker nodes
● Security
● User interface
● Information service
● Workload management
● Software components
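A schematic sketch of how these components cooperate when a user submits an analysis job. This is not real middleware code (gLite, ARC, etc.); the hostnames, dataset name and dictionary layout are invented for illustration.

def submit(job, sites, information_service):
    # Workload management: match the job to a Computing Element and dispatch it.
    # The information service publishes each site's free CPU slots and hosted datasets.
    candidates = [s for s in sites
                  if information_service[s["ce"]]["free_slots"] > 0
                  and job["dataset"] in information_service[s["se"]]["datasets"]]
    site = candidates[0]                          # pick the first matching site
    return run_on_worker_node(job, site)

def run_on_worker_node(job, site):
    # Computing Element: a worker node reads its input from the local Storage Element.
    print(f"running {job['executable']} on {site['ce']}, reading {job['dataset']} from {site['se']}")
    return {"status": "Done", "output": f"{job['executable']}.root"}

info = {
    "ce.knu.ac.kr": {"free_slots": 120},
    "se.knu.ac.kr": {"datasets": {"/CMS/Run2011A/RECO"}},
}
sites = [{"ce": "ce.knu.ac.kr", "se": "se.knu.ac.kr"}]
job = {"executable": "analysis", "dataset": "/CMS/Run2011A/RECO"}
print(submit(job, sites, info))                   # the User Interface would call this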

Bonus: Distributed analysis

Driving a car (an analogy)
● Start the engine (set off), accelerate (keep going), brake (stop)
● Map

Data analysis
● Start the analysis, analysis in progress, analysis finished
● Configuration (the analogue of the map)

Distributed analysis
● Start the analysis, analysis in progress, analysis finished: the same stages, now run as many parallel jobs on the Grid
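The essential idea can be sketched in a few lines: the same analysis configuration runs as many jobs, each over part of the dataset, and the partial results are merged at the end. This toy Python version uses local processes to stand in for Grid jobs; the dataset and file names are invented.

from concurrent.futures import ProcessPoolExecutor   # stands in for Grid worker nodes

def analyse(files):
    # One "job": run the same analysis over its share of the files.
    # A real job would run the experiment's analysis framework; here each file
    # simply contributes a pretend count of 1000 events.
    return sum(1000 for _ in files)

def split(files, files_per_job):
    # Split the input dataset into chunks, one chunk per Grid job.
    return [files[i:i + files_per_job] for i in range(0, len(files), files_per_job)]

if __name__ == "__main__":
    dataset = [f"/store/data/Run2011A/file_{i:03d}.root" for i in range(20)]
    jobs = split(dataset, files_per_job=5)            # 4 jobs instead of 1 big one
    with ProcessPoolExecutor() as pool:               # jobs run in parallel
        partial_results = list(pool.map(analyse, jobs))
    print("events analysed:", sum(partial_results))   # merge the partial results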

Your interest: discovering new physics phenomena vs. how the Internet environment of the future will change. A technology developed so that scientists could exchange information ➔ a revolution in everyday life.

2020: they are coming.