A Design for KCAF for CDF Experiment
Kihyeon Cho (CHEP, Kyungpook National University) and Jysoo Lee (KISTI, Supercomputing Center)
The International Workshop on HEP Data Grid, November 8-9, 2002

Contents
- CHEP and Fermilab CDF
- CDF Data/Analysis Flow
- CDF Grid Road Map (bottom-up method)
  - Step 1. CAF (Central Analysis Farm)
  - Step 2. DCAF (DeCentralized Analysis Farm)
  - Step 3. Grid
- KCAF (DeCentralized Analysis Farm in Korea)
- Future Plan

CHEP and Fermilab CDF
[Diagram: Grid connection between CHEP in Korea and Fermilab in the USA, with KCAF (DCAF in Korea) at CHEP]

Network between CHEP and Overseas
[Diagram: the CHEP campus network (servers and PCs on IBM 8271 switches, an L3 switch, a C6509, and a Gigabit switch, all on Gigabit Ethernet) connects through KOREN to the overseas links: APII to FNAL (45 Mbps) and to KEK (2 x 1 Gbps), and TEIN to CERN (10 Mbps)]

Overview of Fermilab
[Diagram: the Fermilab accelerator complex, showing the Main Injector and Recycler, the antiproton source, the Booster, the CDF and D0 detectors, and the fixed-target experiment area]

CDF Collaboration
- Totals: 11 countries, 52 institutions, 525 physicists
- [Map: member institutions in North America, Europe, and Asia, counted by national labs, universities, and research labs]
- Korea - Center for High Energy Physics: Kyungpook National University, Seoul National University, SungKyunKwan University

Production Farm
[Diagram: the CDF production farm at Fermilab]

CDF Computer Resource: CAF Stage 1
- Current configuration, Stage 1 (a rough cross-check of these figures follows below):
  - 63 dual worker nodes
  - Roughly 160 GHz total, compared to 38 GHz for fcdfsgi2 and 8 GHz for cdfsga (Run I)
  - 7 fileservers for physics (~50% full), 5 for data handling, 2 for development, 2 for user scratch
- Located at FCC (Feynman Computing Center)
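A rough cross-check of the Stage 1 figures, assuming "dual" means two CPUs per node; the per-CPU clock speed below is inferred from the quoted totals, not stated on the slide:

```python
# Back-of-envelope check of the CAF Stage 1 numbers quoted above.
# Assumption: "dual worker node" = 2 CPUs per node; the per-CPU clock
# is derived from the quoted totals, not taken from the slide itself.
nodes = 63
cpus_per_node = 2
total_ghz = 160.0            # "roughly 160 GHz total"
fcdfsgi2_ghz = 38.0

cpus = nodes * cpus_per_node
print(f"{cpus} CPUs -> about {total_ghz / cpus:.2f} GHz per CPU")        # ~1.27 GHz
print(f"Stage 1 is ~{total_ghz / fcdfsgi2_ghz:.1f}x fcdfsgi2 in aggregate clock")
```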

CAF Design
- Design considerations:
  - Submit jobs from 'anywhere'.
  - Job output can be sent directly to the desktop, stored on the CAF for later retrieval, or used as input to a subsequent job (a sketch of these options follows below).
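A minimal sketch of a job description covering the three output choices listed above. This is purely illustrative: the field names (`output_mode`, `destination`, and so on), the dataset name, and the hosts/paths are invented for this example and are not the actual CAF submission interface.

```python
# Hypothetical job description illustrating the three output-handling
# options above; none of these names come from the real CAF tools.
from dataclasses import dataclass

@dataclass
class JobSpec:
    executable: str
    dataset: str
    output_mode: str   # "to_desktop", "store_on_caf", or "chain_to_next_job"
    destination: str   # desktop host:path, CAF scratch path, or name of the next job

jobs = [
    JobSpec("myAnalysis.exe", "example_dataset", "to_desktop",
            "mydesktop.knu.ac.kr:/data/output"),
    JobSpec("myAnalysis.exe", "example_dataset", "store_on_caf",
            "/caf/scratch/user/output"),
    JobSpec("mySkim.exe", "example_dataset", "chain_to_next_job", "fit_job"),
]

for job in jobs:
    print(f"submit {job.executable} on {job.dataset}: "
          f"{job.output_mode} -> {job.destination}")
```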

DCAF/Grid at the CDF Experiment
- Requirement in 2005 (a back-of-envelope reading of these numbers follows below):
  - 200 simultaneous users will analyze 10^7 events of a secondary data set in a day.
  - Need ~700 TB of disk and ~5 THz of CPU by the end of FY'05.
- The current CAF (Central Analysis Farm) is not enough:
  - Limited resources and space at FCC.
  - A network problem at FCC would cut off all analysis, which is risky.
  → We need DCAF (DeCentralized Analysis Farm) and the Grid.
- DCAF (DeCentralized Analysis Farm):
  - Serves users in the regional area and/or around the world.
  - Korea, Toronto, Karlsruhe, ...
  - We call the DCAF in Korea "KCAF".
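To make the scale of these requirements concrete, a back-of-envelope calculation using only the figures quoted above; the implied per-event CPU budget is a derived estimate, not a number from the slide:

```python
# Back-of-envelope: what "200 users x 10^7 events/day" and "~5 THz of CPU"
# imply per event.  Only unit conversions are added to the quoted figures.
users = 200
events_per_user_per_day = 1e7
seconds_per_day = 86400.0
cpu_thz = 5.0                                  # "~5 THz of CPU"

event_rate = users * events_per_user_per_day / seconds_per_day   # events/s
ghz_sec_per_event = cpu_thz * 1e3 / event_rate                   # GHz*s per event

print(f"aggregate analysis rate: {event_rate:,.0f} events/s")            # ~23,000
print(f"implied CPU budget: ~{ghz_sec_per_event:.2f} GHz*s per event")   # ~0.2
```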

DCAF for the CDF Experiment
[Diagram: the CAF in the USA linked over APII (Asia Pacific Information Infrastructure) to KCAF (DeCentralized Analysis Farm in Korea) at the Center for HEP (CHEP), Kyungpook National University, Daegu, Korea, alongside the DCAFs in Karlsruhe and Toronto]

Proposed Schemes for the Final Goal
[Diagram: alternative schemes connecting the CAF and the DCAF in Korea (KCAF)]

The Road Map for the Goal
- Step 1. Build an MC production farm using KCAF
  - First, we are constructing a 12-CPU test bed for KCAF.
  - After the policy decision inside CHEP (which also hosts test beds for EDG and iVDGL), we will decide how many of this year's planned 140 CPUs will be used for the actual MC generation farm.
- Step 2. Handle real data
  - Extend KCAF to the real data handling system using SAM (Sequential data Access via Meta-data), GridFTP, etc., once the real data handling system has settled down.
- Step 3. Final goal of the CDF Grid
  - Gridification of KCAF in connection with EDG, iVDGL, and the CDF Grid.

The Road Map in Detail
1. Software
   - CDF software
   - FBSNG for batch jobs
2. Kerberos
   - Client: login from Korea to Fermilab
   - Server: login from Fermilab to Korea
   - KDC: client and server trust
3. Data Handling System
   - Glasgow, Italy, and Yale use the Data Handling System.
   - A SAM station needs to be installed on one PC worker node.
4. OS
   - Fermilab Red Hat Linux 7.3 with kernel
   - Fundamental license problem for the DB

A Design of KCAF for the CDF (Fermilab) Grid
[Diagram: data flow between FCC (CAF) and CHEP (KCAF). On the FCC side: a 1 TB buffer, the ICAF FTP server, CDFen raw data and STKen calibration data in dCache, and fcdfsam (SAM master, FSS, stager). On the CHEP side: the KCAF head node, the KCAF cluster, and a SAM station (KCAF), reached from user and remote desktops through CAFGUI/ICAFGUI. Transfers use cp and rcp locally, and rcp, bbftp, and GridFTP between sites; a sketch of one such transfer follows below.]
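As an illustration of the buffer-to-buffer copies named in the design, a minimal sketch that shells out to `globus-url-copy`, the standard GridFTP command-line client. The hostnames and file paths are placeholders, not the real FCC or CHEP endpoints, and this is not the actual KCAF transfer machinery; a valid Grid proxy (e.g. from `grid-proxy-init`) is assumed.

```python
# Sketch of a single FCC -> KCAF buffer copy over GridFTP, one of the
# transfer methods listed in the design above.  Hostnames and paths are
# placeholders; a Grid proxy is assumed to exist already.
import subprocess

SRC = "gsiftp://fcc-buffer.fnal.gov/cdf/buffer/run000000.raw"    # placeholder URL
DST = "gsiftp://kcaf-head.knu.ac.kr/kcaf/buffer/run000000.raw"   # placeholder URL

def stage_file(src: str, dst: str) -> None:
    """Copy one file between the FCC and KCAF buffers using globus-url-copy."""
    subprocess.run(["globus-url-copy", src, dst], check=True)

if __name__ == "__main__":
    stage_file(SRC, DST)
```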

Future Plan (This Year)
- A Linux cluster of around 140 CPUs will be constructed at CHEP by the end of this year.
- We are constructing KCAF for CDF at CHEP, Kyungpook National University.
- We will contribute a 1 TB hard disk (now) plus a 4 TB hard disk (December) to CDF as the network buffer between CHEP and Fermilab.
- The DCAF demonstration will be shown at SC2002 on November 16, 2002.