
The Current Status of KorCAF and CDF Grid
YuChul Yang | October 20, 2006 | KPS 2006 Fall Meeting, EXCO, Daegu

YuChul Yang, SungHyun Jang, Mian Shabeer Ahmed, Adil Khan, Muhammad Ajmal, DaeJung Kong, JiEun Kim, JunSuk Seo, DongHee Kim (Department of Physics, Kyungpook National University)
YoungJang Lee, JiEun Jung, ChangSung Moon, HyunSoo Kim, EunJu Jeon, KyungKwang Joo, SooBong Kim (Department of Physics, Seoul National University)
JungHwan Ko, JaeSeung Lee, InTae Yu (Department of Physics, Sungkyunkwan University)
Kihyeon Cho (Supercomputing Center, KISTI)

Introduction to CDF Computing
 Developed to respond to the experiment's greatly increased need for computational and data handling resources in Run II.
 One of the first large-scale cluster approaches to user computing for general analysis.
 Greatly increased the CPU power and data available to physicists.
 CDF Grid via CAF, DCAF, SAM and SAMGrid
☞ DCAF (DeCentralized Analysis Farm)
☞ SAM (Sequential Access through Metadata) – the data handling system
☞ SAMGrid – the combination of the SAM and JIM (Job Information Management) systems

Outline
 CAF (Central Analysis Farm): a large central computing resource at Fermilab, based on Linux cluster farms with a simple job management scheme.
 DCAF (Decentralized CDF Analysis Farm): we extended the CAF model, including its command line interface and GUI, to manage and work with remote resources.
 Grid: we are now in the process of adapting and converting our workflow to the Grid.

Environment on CAF
 All basic CDF software is pre-installed on the CAF.
 Authentication via Kerberos
☞ Jobs are run via mapped accounts, with the actual user authenticated through a special principal.
☞ For database and data handling access, the remote user's ID is passed on through a lookup of the actual user via the special principal.
 The user's analysis environment comes over in a tarball – no need to pre-register or to submit only certain kinds of jobs.
 The job returns results to the user via secure ftp/rcp, controlled by the user's script and principal (see the sketch below).
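A minimal sketch of this submission flow, under stated assumptions: the `CafSubmit` command name, its flags, and the output URL format are illustrative placeholders, not the actual CAF interface.

```python
# Sketch of the CAF submission flow described above. The "CafSubmit"
# command and its flags are hypothetical placeholders, not the real CAF CLI.
import subprocess
import tarfile


def make_user_tarball(workdir: str, tarball: str = "analysis_env.tgz") -> str:
    """Pack the user's analysis environment; the farm unpacks it on the worker."""
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(workdir, arcname=".")
    return tarball


def have_kerberos_ticket() -> bool:
    """The CAF authenticates the actual user through a Kerberos principal."""
    return subprocess.run(["klist", "-s"]).returncode == 0


def submit(workdir: str, user_script: str, out_url: str) -> None:
    if not have_kerberos_ticket():
        raise RuntimeError("No valid Kerberos ticket; run kinit first")
    tarball = make_user_tarball(workdir)
    # Results are returned to out_url (an ftp/rcp destination) under the
    # control of the user's script and principal.
    subprocess.run(
        ["CafSubmit", "--tarball", tarball, "--script", user_script,
         "--outlocation", out_url],
        check=True,
    )


if __name__ == "__main__":
    submit("./myana", "run_analysis.sh",
           "mynode.knu.ac.kr:/data/results/job_out.tgz")
```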

In 2006, about 50% of the analysis farm capacity is outside of FNAL, with distributed clusters in Korea, Taiwan, Japan, Italy, Germany, Spain, the UK, the USA and Canada.

Current DCAF Approach
 Cluster technology (CAF = Central Analysis Farm) extended to remote sites (DCAFs = Decentralized CDF Analysis Farms).
 Multiple batch systems supported: converting from the FBSNG system to Condor on all DCAFs (see the sketch below).
 The SAM data handling system is required for offsite DCAFs.
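To illustrate the FBSNG-to-Condor conversion, here is a sketch of the kind of Condor submit description a DCAF batch layer would generate; the file names, wrapper script, and attribute values are illustrative assumptions, not the actual DCAF code.

```python
# Sketch: generating a Condor submit description of the kind a DCAF batch
# layer would write after the FBSNG-to-Condor conversion. Names are
# illustrative, not the actual DCAF implementation.
SUBMIT_TEMPLATE = """\
universe   = vanilla
executable = {wrapper}
arguments  = {segment}
transfer_input_files = {tarball}
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
output = caf_{segment}.out
error  = caf_{segment}.err
log    = caf.log
queue
"""


def write_submit_file(wrapper: str, tarball: str, segment: int,
                      path: str = "caf_job.submit") -> str:
    """Write one job segment's submit file; hand the result to condor_submit."""
    with open(path, "w") as f:
        f.write(SUBMIT_TEMPLATE.format(wrapper=wrapper, tarball=tarball,
                                       segment=segment))
    return path


if __name__ == "__main__":
    print(write_submit_file("caf_wrapper.sh", "analysis_env.tgz", 1))
```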

Current CDF Dedicated Resources (Aug 2006)

Details of KorCAF Resources

Type             Node(s)                     CPU                 RAM   HDD    No.
head node        cluster46.knu.ac.kr         AMD MP2000 x 2      2G    80G    1
SAM station      cluster67.knu.ac.kr         Pentium 4 2.4G      1G    80G    1
submission node  cluster52.knu.ac.kr         Pentium 4 2.4G      1G    80G    1
worker nodes     cluster39~cluster73 (21)    AMD MP2000 x 2      2G    80G    4
                 cluster102~cluster114 (13)  AMD MP2200 x 2      1G    80G    2
                 cluster122~cluster130 (9)   AMD MP2800 x 2      2G    80G    11
                 cluster137~cluster139 (3)   AMD MP2800 x 2      2G    250G   2
                 (updated 2006)              Pentium 4 2.4G      1G    80G    15
                                             Xeon 3.0G x 2       2G    80G    9
                                             Xeon 3.0G x 2       2G    80G    3
Total                                        81 CPUs (179.9GHz)  79G   4260G  49

 Storage status

             CPU           RAM   HDD     No.
   Current                       0.6TB
             Opteron dual  2G    4TB     1
             Xeon dual     1G    1TB     1
   Total                         5.6TB   2

 Working on the CondorCAF batch system
 cdfsoft
☞ Installed products: 4.8.4, 4.9.1, 4.9.1hpt3, 5.2.0, 5.3.0, 5.3.1, 5.3.3, 5.3.3_nt, 5.3.4, development
☞ Installed binary products: 5.3.1, 5.3.3, 5.3.3_nt, 5.3.4

CAF GUI & Monitoring System
[Screenshot of the CAF submission GUI and monitoring pages, showing farm selection, process type, submit status, the user script and I/O file locations, and data access.]
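The fields exposed by the GUI map onto a per-job specification. A minimal sketch of that mapping follows; the class and field names are illustrative assumptions, not the actual CAF schema.

```python
# Sketch: the CAF GUI fields as a job specification. Field names are
# illustrative, not the actual CAF schema.
from dataclasses import dataclass


@dataclass
class CafJobSpec:
    farm: str           # which CAF/DCAF farm to run on (e.g. "KorCAF")
    process_type: str   # queue/priority class (e.g. "short", "medium", "long")
    user_script: str    # entry point executed on the worker node
    input_tarball: str  # the user's analysis environment
    out_location: str   # where results are delivered (ftp/rcp destination)
    dataset: str = ""   # optional SAM dataset for data access


job = CafJobSpec(farm="KorCAF", process_type="medium",
                 user_script="run_analysis.sh",
                 input_tarball="analysis_env.tgz",
                 out_location="mynode.knu.ac.kr:/data/results/out.tgz",
                 dataset="my_dataset")
```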

Functionality for Users (KorCAF)

Feature                            Status
Self-contained user interface      Yes
Runs arbitrary user code           Yes
Automatic identity management      Yes
Network delivery of results        Yes
Input and output data handling     Yes
Batch system priority management   Yes
Automatic choice of farm           Not yet (Grid)
Negotiation of resources           Not yet (Grid)
Runs on arbitrary grid resources   Not yet (Grid)

Total CDF Computing Requirements
[Table of requirements by fiscal year. Input conditions: integrated luminosity (fb⁻¹), events (x 10⁹), peak rate (MB/s, Hz). Resulting requirements: analysis and reconstruction CPU (THz), disk (PB), tape I/O (GB/s), tape volume (PB).]

Analysis CPU, disk and tape needs scale with the number of events. The FNAL portion of analysis CPU is assumed to be roughly 50% beyond 2005.
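Since the requirements scale linearly with the number of events, projections of this kind can be computed directly. The sketch below shows the arithmetic only; the per-event costs are hypothetical placeholders, not the values from the original table.

```python
# Toy projection showing how requirements scale with the event count.
# Per-event costs below are HYPOTHETICAL placeholders, not CDF's figures.
EVENT_SIZE_MB = 0.1        # assumed stored size per event, MB
CPU_GHZ_SEC_PER_EVT = 1.0  # assumed analysis cost per event, GHz-seconds
SECONDS_PER_YEAR = 3.15e7


def project(events_e9: float) -> dict:
    """Estimate tape volume and sustained analysis CPU for N x 10^9 events."""
    n = events_e9 * 1e9
    return {
        "tape_volume_PB": n * EVENT_SIZE_MB / 1e9,  # 1 PB = 1e9 MB
        "analysis_THz": n * CPU_GHZ_SEC_PER_EVT / (SECONDS_PER_YEAR * 1e3),
    }


# Doubling the event count doubles both estimates.
print(project(2.0))
print(project(4.0))
```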

Movement to Grid
 It is the worldwide trend for HEP experiments.
 Need to take advantage of global innovations and resources.
 CDF still has a lot of data to be analyzed, and cannot continue to expand dedicated resources. ⇒ Use the Grid.

Activities for CDF Grid
 Testing various approaches to using Grid resources (Grid3/OSG and LCG)
 Adapt the CAF infrastructure to run on top of the Grid using Condor glide-ins (GlideCAF); see the sketch below
 Use direct submission via the CAF interface to OSG and LCG
 Use SAMGrid/JIM sandboxing as an alternate way to deliver experiment and user software
 Combine DCAFs with Grid resources
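A sketch of the glide-in idea behind GlideCAF, under stated assumptions: a pilot job is sent to a Grid site, and its payload starts a Condor startd that joins the central CAF pool, so user jobs run on the remote node transparently. The hostnames, tarball URL, and script names are illustrative, not the actual GlideCAF code.

```python
# Sketch of the Condor glide-in mechanism used by GlideCAF.
# Hostnames and file names are illustrative placeholders.
import subprocess

GLIDEIN_PAYLOAD = """\
#!/bin/sh
# Runs on the Grid worker node: fetch a Condor tarball, point the startd
# at the central CAF collector, and join the pool.
wget -q http://caf-head.example.edu/condor_glidein.tgz
tar xzf condor_glidein.tgz
export CONDOR_CONFIG=$PWD/glidein_condor_config
echo "COLLECTOR_HOST = caf-head.example.edu" >> $CONDOR_CONFIG
./sbin/condor_master -f   # startd advertises this node to the CAF pool
"""


def submit_glideins(site_ce: str, n_pilots: int) -> None:
    """Send n pilot jobs to a Grid computing element via Condor-G."""
    with open("glidein.sh", "w") as f:
        f.write(GLIDEIN_PAYLOAD)
    submit = (
        "universe        = globus\n"
        f"globusscheduler = {site_ce}\n"
        "executable      = glidein.sh\n"
        f"queue {n_pilots}\n"
    )
    with open("glidein.submit", "w") as f:
        f.write(submit)
    subprocess.run(["condor_submit", "glidein.submit"], check=True)


if __name__ == "__main__":
    submit_glideins("ce.example-site.org/jobmanager-condor", 10)
```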

Conclusions
 CDF has successfully deployed a global computing environment (the DCAFs) for user analysis.
 A large portion (~50%) of the experiment's total CPU resources is now provided offsite through a combination of DCAFs and other clusters.
 KorCAF (the DCAF in Korea) is now running on the Condor batch system.
 Active work is in progress to build bridges to true Grid methods and protocols, providing a path to the future.