International Workshop on HEP Data Grid, Aug 23, 2003, KNU
Status of Data Storage, Network, and Clustering in the SKKU CDF Group
Intae Yu*, Joong Seok Chae
Department of Physics, SungKyunKwan University (SKKU), Suwon, Korea
(On behalf of the HEP Data Grid Working Group)

CDF II (Collider Detector at Fermilab)
- Proton-antiproton collider experiment
- Collision rate: 2.5 MHz
- Event rate to tape: ~300 Hz
- Average data size of an event: ~100 KB
- Expected size of real data per year: 10^9 events x 100 KB = 100 TB
- Monte Carlo data = real data x 3
- Petabyte-scale data storage and data-processing computing power are required
  -> CDF II Central Analysis Farm (CAF)
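Restating the slide's arithmetic (the factor of 3 for Monte Carlo is the slide's own planning assumption):

$$V_{\text{real}} = 10^{9}\ \text{events/yr} \times 100\ \text{KB/event} = 10^{14}\ \text{B} = 100\ \text{TB/yr},$$
$$V_{\text{total}} \approx (1+3)\,V_{\text{real}} = 400\ \text{TB/yr},$$

so a multi-year dataset indeed approaches the petabyte scale quoted on the slide.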

CDF CAF / Korea DCAF
- CDF CAF (Central Analysis Farm) at the Feynman Center: 306 dual nodes and 92 file servers (180 TB)
- DCAFs (Decentralized Analysis Farms) being developed at off-sites (UCSD, KNU, ...)
- Korea DCAF at KNU: a Linux cluster with a PBS batch system, 23 CPUs
- Implementation of SAM Grid through the DCAF and the PBS cluster -> Dr. Kihyeon Cho's talk

SKKU CDF Group
- 1 faculty member and 4 graduate students
- Activity: B physics analysis; currently ~400 GB and ~10 CPUs in use at SKKU and Fermilab
- Computing resources at SKKU:
  - File server: ~4 TB (2 TB now + 2 TB planned)
  - Linux farm: ~10 nodes (2 now + 8 planned)
  - Desktops: 7 Linux PCs
  - Network: KOREN + KORNET

File Server
- 2 TB file server; optimization is in progress
- 12-channel RAID controller (3ware)
- 13 x 180 GB 7200 rpm HDDs (IBM)
- 1 Gbps link to the work nodes (planned)
- Another 2 TB file server is planned
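The quoted 2 TB of usable space is consistent with a RAID-5 reading of the array (an assumption; the slide does not state the RAID level or spare configuration): a 13-disk RAID-5 array gives $12 \times 180\ \text{GB} \approx 2.16\ \text{TB}$ usable, while a 12-disk array plus one hot spare on the 12-channel controller gives $11 \times 180\ \text{GB} \approx 1.98\ \text{TB}$. Either reading matches the quoted capacity.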

Linux Farm
- Currently 1 batch server + 1 work node; 7-9 more work nodes in a month
- CPU: one Pentium 4 2.8C with Hyper-Threading Technology (appears to the OS as two logical CPUs)
- 1 GB DDR RAM / 120 GB 7200 rpm HDD per node
- 1 Gbps network bandwidth
- Plan to install a PBS batch system
- Then move to the FBSNG system (mini DCAF) and SAM Grid
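Once PBS is in place, job submission would look roughly like the sketch below (a minimal sketch; the queue, job, and executable names are hypothetical, as the slide only states that PBS will be installed):

```sh
#!/bin/sh
# run_ana.sh -- minimal PBS job script (all names are placeholders)
#PBS -N cdf_bphysics     # job name
#PBS -q batch            # queue (assumed default queue name)
#PBS -l nodes=1          # one work node per job
#PBS -j oe               # merge stdout and stderr into one log file

cd "$PBS_O_WORKDIR"      # PBS starts jobs in $HOME; return to the submission directory
./analyze input.root     # the user's CDF analysis executable (placeholder)
```

The script would be submitted with `qsub run_ana.sh` and monitored with `qstat`.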

Construction Model
[Diagram: Linux farm construction model. A user at a Linux desktop submits a job to the batch server and receives the result; the batch server runs the batch jobs on the working nodes and collects their results; the file server is shared with the desktop and the nodes over NFS.]
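The NFS piece of this model amounts to exporting the file server's data area to the farm; a minimal sketch, with hypothetical host names and paths:

```sh
# /etc/exports on the file server (hosts and paths are placeholders)
/data  batch01(rw,sync)  node01(rw,sync)  node02(rw,sync)

# corresponding /etc/fstab entry on each work node and desktop:
# fileserver:/data  /data  nfs  defaults  0  0
```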

Network
- Network connectivity:
  - CHEP - SNU, CHEP - SKKU: ~40 Mbps
  - CHEP - CDF: ~20 Mbps
  - SKKU - CDF: < 2 Mbps
- Plan: a direct connection to KOREN
[Diagram: SNU and SKKU connect to CHEP/KNU over KOREN; CHEP/KNU reaches Fermilab/CDF via the 155 Mbps APII link.]
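End-to-end figures like these are commonly checked with a memory-to-memory TCP test; a sketch using iperf (the host name is hypothetical):

```sh
# on the far end (e.g. a CHEP/KNU host):
iperf -s

# on an SKKU machine: 30-second TCP test, reporting every 5 seconds
iperf -c chep-gw.knu.ac.kr -t 30 -i 5
```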

Grid Activity
- Globus 2.2 is installed on a test machine
- Configuration of the CDF analysis software under the Globus environment will be done between SKKU and the KNU DCAF
- In collaboration with the SKKU Belle group, an EUDG (EU DataGrid) test bed will be constructed
- Plan to implement an SKKU mini DCAF with SAM Grid on the Linux farm
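A minimal smoke test of such a Globus 2.2 installation uses the standard GT2 client tools (the gatekeeper host name below is hypothetical):

```sh
grid-proxy-init                                    # create a short-lived proxy from the user certificate
globus-job-run grid-test.skku.ac.kr /bin/hostname  # run a trivial job via the remote gatekeeper
globus-url-copy file:///tmp/test.dat \
    gsiftp://grid-test.skku.ac.kr/tmp/test.dat     # GridFTP transfer check
```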

Summary
- The SKKU CDF group's work on the Grid is at a very early stage
- Construction of the file server and Linux farm is in progress
- Network plan: 1 Gbps between SKKU machines; 100 Mbps to KNU/SNU and 20 Mbps to Fermilab/CDF
- Roadmap:
  1st: configure the CDF software under Globus
  2nd: build an EUDG testbed at SKKU
  3rd: construct a mini DCAF through SAM Grid at SKKU