DCAF (DeCentralized Analysis Farm) for CDF Experiments

HAN DaeHee*, KWON Kihwan, OH Youngdo, CHO Kihyeon, KONG Dae Jung, KIM Minsuk, KIM Jieun, MIAN Shabeer, CHANG Sunghyeon, YANG Yuchul, KIM Donghee, SON Dongchul, CHO Ilsung¹, LEE Jae Seung¹, YU Intae¹, LEE Jik², KIM Subong², LEE Jysoo³

Center for High Energy Physics, Kyungpook National University; ¹Sungkyunkwan University; ²Seoul National University; ³KISTI
(On behalf of the HEP Data Grid Working Group in Korea)

Contents
- DCAF and SAM Grid
- Scheme of KorCAF and CAF
- A Design of DCAF in Korea for CDF (Fermilab) Grid
- System Structure of DCAF (1)
- System Structure of DCAF (2)
- FBSNG
- CAF GUI (Job Submission Tool)
- FBSWWW (Monitoring Tool)
- DCAF System
- Summary and Future Plan

DCAF and SAM Grid

CAF (Central Analysis Farm)
- Located at the Feynman Computing Center (FCC) at Fermilab for the CDF experiment
- Dual-CPU clusters with a 180 TB disk server

DCAF (DeCentralized Analysis Farm)
- Resources and space at FCC are limited
- In Run IIb the data volume will be about 7 times larger than now → DCAF
- Serves users in their regional area and around the world

SAM (Sequential data Access via Meta-data)
- Basically a distributed data transfer and management service
- Handles the real data at Fermilab for DCAFs around the world
- Gridification of DCAF via SAM-Grid
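To make the DCAF data path above concrete, here is a minimal Python sketch, assuming a hypothetical SamStation wrapper (the real SAM client interface is not shown in these slides): a station caches files locally and only triggers a wide-area fetch from FCC on a cache miss.

```python
# Illustrative sketch only: models the SAM-mediated data flow described
# above (Fermilab mass storage -> SAM station cache -> local analysis job).
# The SamStation class and its methods are hypothetical, not the real SAM API.
import os

class SamStation:
    """Toy stand-in for a SAM station that caches files locally."""
    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def deliver(self, dataset, filename, fetch):
        """Return a local path for `filename`, fetching it on a cache miss."""
        local = os.path.join(self.cache_dir, filename)
        if not os.path.exists(local):          # cache miss: pull from FCC
            fetch(dataset, filename, local)    # e.g. rcp/bbftp/GridFTP transfer
        return local

def fetch_from_fcc(dataset, filename, dest):
    # Placeholder for the site-specific wide-area transfer.
    raise NotImplementedError("transfer command goes here")

station = SamStation("/cache/sam")
# path = station.deliver("example_dataset", "raw_0001.root", fetch_from_fcc)
```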

Scheme of KorCAF and CAF

[Diagram: KorCAF (DCAF in Korea) and the Fermilab CAF — job submission node nkchep3.fnal.gov, CAF head node fcdfhead1.fnal.gov, KorCAF head node cluster46.knu.ac.kr, linked through SAMGrid.]

A Design of DCAF in Korea for CDF (Fermilab) Grid

[Diagram: data flow between FCC (CAF) and CHEP (KCAF). Raw data from CDFen and calibration data from STKen pass through dCache and rcp into a 1 TB buffer at FCC; fcdfsam (smaster, FSS, stager) serves SAM station requests for KCAF; files move to the KCAF head node and cluster via rcp, GridFTP or bbftp; users submit from desktops with the CAF GUI / ICAF GUI, and an ICAF FTP server supplies remote desktops.]
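The arrows labelled rcp/GridFTP/bbftp in the diagram are ordinary wide-area file copies between FCC and CHEP. As a hedged illustration, the sketch below drives the standard GridFTP client, globus-url-copy, from Python; the host name and paths are placeholders, not the actual production values.

```python
# Minimal sketch of one of the transfer methods named in the diagram
# (GridFTP).  Host and paths below are placeholders for illustration.
import subprocess

def gridftp_copy(src_host, src_path, dest_path):
    """Pull one file from an FCC storage node to the local KCAF cache."""
    subprocess.run(
        ["globus-url-copy",
         f"gsiftp://{src_host}{src_path}",   # remote GridFTP source
         f"file://{dest_path}"],             # local destination file
        check=True)

# gridftp_copy("fcdfsam.fnal.gov", "/buffer/raw/file.root",
#              "/cache/kcaf/file.root")
```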

System Structure of DCAF (1)

- Job submission node: nkchep3.fnal.gov (CAF GUI, FBSWWW)
- Head node: cluster46.knu.ac.kr
- Worker nodes: cluster46.knu.ac.kr ~

System Structure of DCAF (2)

[Diagram: DCAF integration with the SAM Grid.]

FBSNG

- Parallel batch jobs: an FBSNG job is a collection of sections; each section is an array of identical, potentially co-operating processes running on one or more farm (worker) nodes (see the sketch after this list).
- Flexible farm configuration: FBSNG allows an unlimited number of resources of several types to be defined for the farm.
- Dynamic farm configuration: the FBSNG administrator can modify the farm configuration with an interactive configuration utility or GUI without restarting FBSNG.
- Load balancing: FBSNG has a fully customizable scheduler.
- FBSNG provides an Application Programming Interface (API).
- Kerberos support.
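The job/section/process-array structure above can be rendered schematically as follows. This is illustrative Python mirroring the concept only; it is not the actual FBSNG API, whose syntax is not shown in these slides.

```python
# Schematic model of the FBSNG job structure described above: a job is a
# collection of sections, each section an array of identical processes
# spread over worker nodes.  Hypothetical classes, not the real FBSNG API.
from dataclasses import dataclass, field

@dataclass
class Section:
    name: str
    executable: str       # run identically in every slot of the array
    n_processes: int      # size of the section's process array
    queue: str = "short"  # process type, cf. the CAF GUI slide

@dataclass
class Job:
    user: str
    sections: list = field(default_factory=list)

job = Job(user="cdfuser")
job.sections.append(Section("fetch",   "/cdf/bin/stage_input.sh", 1))
job.sections.append(Section("analyze", "/cdf/bin/run_ana.sh",     10))
```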

CAF GUI (Job Submission Tool)

- Analysis farm: CAF (Fermilab), KorCAF (KNU)
- Process type: short, medium, long, …
- Initial command
- Original directory
- Output file location: rcp/scp format that includes the Unix ID
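As a rough sketch of how these GUI fields might be bundled into one submission request: the field names follow the list above, but the function and its validation are hypothetical, not the actual CAF GUI code.

```python
# Illustrative bundling of the CAF GUI fields into a submission record.
# The output-location check enforces the user@host:/path (rcp/scp) form
# mentioned in the slide.  Hypothetical code, for illustration only.
import re

OUT_RE = re.compile(r"^[\w.-]+@[\w.-]+:/.+$")  # e.g. user@cluster46.knu.ac.kr:/data/out.tgz

def make_submission(farm, proc_type, command, directory, out_location):
    assert farm in ("caf", "korcaf"), "analysis farm: CAF (Fermilab) or KorCAF (KNU)"
    assert proc_type in ("short", "medium", "long"), "unknown process type"
    if not OUT_RE.match(out_location):
        raise ValueError("output must be rcp/scp style: user@host:/path")
    return {"farm": farm, "type": proc_type, "cmd": command,
            "dir": directory, "out": out_location}
```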

FBSWWW (Monitoring Tool)

- Queues
- Jobs
- Nodes
- Process types

DCAF System

DCAF in KNU:

Object            Item                          Quantity
CPU               AMD Athlon(tm) MP             —
RAM               2 GB                          3
HDD               8 GB                          3
UPS               40 kW                         1
Network switch    24 x 100 Mbps + 2 x 1 Gbps    1

※ The total KorCAF system will be 25 machines (33 CPUs).

Summary and Future Plan

- We have designed and succeeded in constructing KorCAF (DCAF in Korea), the first DCAF in the CDF experiment outside of Fermilab.
- We are currently stabilizing KorCAF (network, worker nodes and storage) and setting it up for use by all CDF collaborators.
- The total KorCAF system will be 25 machines (33 CPUs).
- As a first step, KorCAF will be used as an MC production farm.
- We plan to demonstrate KorCAF with KISTI at SC2003.