Transporting High Energy Physics Experiment Data over High Speed Genkai/Hyeonhae, presented on 4 October 2002 at Oita, Korea-Kyushu Gigabit Network meeting.

Presentation transcript:

Transporting High Energy Physics Experiment Data over High Speed Genkai/Hyeonhae. 4 October 2002, Oita, Korea-Kyushu Gigabit Network meeting.

KEK, High Energy Accelerator Research Organization (高エネルギー加速器研究機構), located at Tsukuba.
- High Energy Physics experiments such as BELLE and K2K are organized as international collaborations and are producing a huge amount of data.
- We have many Korean collaborators participating in these experiments.
- Network connectivity between KEK and Korean universities has been the biggest problem for us. Thanks to APAN, it has improved very much, but it is still far from sufficient.

BELLE is transferring the data with SuperSINET/GbE's [diagram]:
- Data storage: 630 TB
- PC farm: P-III x 240 (8,200 SPECint95) with 14 TB of disk, plus a compute server (5,000 SPECint95) and internal servers on the LAN
- Data rate: 20 MB/sec
- Data volume: 50 TB/year or more
- SuperSINET / GbE to universities
- Started in 1999
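As a rough sanity check (not part of the original slide), a short Python sketch converting the quoted 20 MB/sec data rate and 50 TB/year volume into the sustained bandwidth they imply; the unit conventions (1 TB = 10**12 bytes, ~3.15e7 seconds per year) are assumptions:

# Bandwidth arithmetic for the BELLE transfer figures quoted above.
SECONDS_PER_YEAR = 3.15e7  # assumed

def sustained_gbps(bytes_per_second):
    """Convert a byte rate into gigabits per second."""
    return bytes_per_second * 8 / 1e9

print(f"20 MB/sec  -> {sustained_gbps(20e6):.2f} Gbps sustained")           # ~0.16 Gbps
print(f"50 TB/year -> {sustained_gbps(50e12 / SECONDS_PER_YEAR):.3f} Gbps")  # ~0.013 Gbps average

Both figures sit comfortably below 1 Gbps, which is consistent with dedicated GbE circuits being adequate for this traffic.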

SuperSINET Cable Map [map]: nodes at Sapporo, Sendai 1-3, KEK Tsukuba, TIT, Waseda, Tokyo 1-3, NAO, ISAS, Okazaki, NIG, NIFS, Nagoya, Kanazawa, Kyoto 1-2, Osaka, Kobe, Hiroshima, NII 1-2 and Fukuoka, with optical cross-connects (OXC) at Tokyo, Osaka and Nagoya.

SuperSINET Typical Circuit Configuration [diagram]: sites (Site 1 through Site 5) attach routers (R) via GbE to WDM equipment, which feeds optical cross-connects (OXC) at Hub A and Hub B over the 10 Gbps WDM backbone.

In Japan, the data are transported to major Japanese universities with SuperSINET/GbE (dedicated end-to-end GbE's for HEP) and are analyzed there ---> Distributed Data Analysis (or Data Grid).
Our hope is to extend this scheme to Korean universities:
- SuperSINET/GbE to Kyushu Univ
- GbE over Genkai/Hyeonhae
- end-to-end GbE in KOREN
If this approach is difficult, the second solution is to use IP connectivity through SuperSINET - Genkai/Hyeonhae - KOREN. (A rough comparison of the two options is sketched below.)
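To make the trade-off between the two options concrete, a small Python sketch; the dataset size and the effective rates (a ~900 Mbps dedicated end-to-end GbE versus an assumed ~100 Mbps share of a routed IP path) are illustrative assumptions, not figures from the slides:

# Transfer-time comparison for the two connectivity options above (all rates assumed).
def transfer_hours(dataset_tb, effective_mbps):
    """Hours needed to move dataset_tb terabytes at effective_mbps megabits per second."""
    return dataset_tb * 1e12 * 8 / (effective_mbps * 1e6) / 3600

for label, mbps in [("dedicated end-to-end GbE (~900 Mbps)", 900),
                    ("routed IP path, shared (~100 Mbps)", 100)]:
    print(f"{label}: {transfer_hours(1.0, mbps):.1f} hours per TB")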

ICFA and International Networking
- ICFA = International Committee for Future Accelerators
- ICFA Statement on “Communications in Int’l HEP Collaborations” of October 17, 1996:
“ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:
  - Review their operating methods to ensure they are fully adapted to remote participation
  - Strive to provide the necessary communications facilities and adequate international bandwidth”

ICFA Standing Committee on Interregional Connectivity (SCIC)
- Created by ICFA in July 1998 in Vancouver
- CHARGE: Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe. As part of the process of developing these recommendations, the committee should:
  - Monitor traffic
  - Keep track of technology developments
  - Periodically review forecasts of future bandwidth needs, and
  - Provide early warning of potential problems
- Create subcommittees when necessary to meet the charge
- The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors

ICFA-SCIC Core Membership
- Representatives from major HEP laboratories: Manuel Delfino (CERN) (to W. Von Rueden), Michael Ernst (DESY), Matthias Kasemann (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
- User representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
- For Russia: Slava Ilyin (MSU)
- ECFA representatives: Frederico Ruggieri (INFN Frascati), Denis Linglin (IN2P3, Lyon)
- ACFA representatives: Rongsheng Xu (IHEP Beijing), HwanBae Park (Korea University)
- For South America: Sergio F. Novaes (University de S.Paulo)

LHC Computing Model Data Grid Hierarchy (ca. 2005) [diagram]: the experiment's Online System (~PByte/sec off the detector) feeds the Offline Farm / CERN Computer Centre (~25 TIPS, Tier 0+1) at ~100 MBytes/sec; Tier 1 centres (FNAL, IN2P3, Tokyo/KEK, INFN) connect at ~2.5 Gbits/sec; Tier 2 centres at ~Gbps; institutes (~0.25 TIPS, Tier 3) and physicists' workstations (Tier 4) at Mbits/sec. Physicists work on analysis “channels”; each institute has ~10 physicists working on one or more channels, with a physics data cache. CERN/Outside Resource Ratio ~1:2; Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1.
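A small Python sketch of what the stated ratios imply for per-centre resources; the centre counts (5 Tier 1, 25 Tier 2) and the ~75 TIPS total (three times the ~25 TIPS quoted for CERN) are hypothetical illustrations, not slide content:

# Resource split under Tier0 : sum(Tier1) : sum(Tier2) ~ 1:1:1 (CERN/outside ~ 1:2).
def tier_shares(total_tips, n_tier1, n_tier2):
    """Return (Tier 0, per-Tier 1, per-Tier 2) capacity under the 1:1:1 split."""
    third = total_tips / 3
    return third, third / n_tier1, third / n_tier2

t0, t1, t2 = tier_shares(75, 5, 25)  # hypothetical counts
print(f"Tier 0 at CERN: {t0:.0f} TIPS; per Tier 1: {t1:.0f} TIPS; per Tier 2: {t2:.0f} TIPS")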

HEP Major Links: BW Roadmap in Gbps; Shown at ICHEP2002

iGrid2002: OC192+OC48 Trial, Sep 2002. Assisted by Level3 (OC192, a short-term donation from Level 3) and Cisco (10GbE and 16 x 1GbE).

US-CERN DataTAG Link: tests with Grid TCP [plots]: 3 streams on 3 GbE ports (Syskonnect); 1-3 streams on 1-3 GbE ports (Syskonnect).

HEP Lambda Grids: Fibers for Physics
- Problem: Extract “small” data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores.
- Survivability of the HEP Global Grid System, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
- Example: take 800 secs to complete the transaction; the net throughput (Gbps) required then scales with the transaction size (TB), reaching the capacity of a fiber today at the upper end (worked out in the sketch below).
- Summary: Providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within ~6-10 years, would enable “Petascale Grids with Terabyte transactions” within this decade, as required to fully realize the discovery potential of major HEP programs, as well as other data-intensive fields.
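To make the 800-second example concrete, a short Python sketch; the transaction sizes follow the 1 to 100 TB range stated in the problem above, while the throughputs are derived arithmetic rather than values copied from the slide:

# Throughput needed to move a given transaction size in a fixed 800-second window.
TRANSACTION_TIME_S = 800

def required_gbps(size_tb):
    """Gigabits per second needed to ship size_tb terabytes in 800 seconds."""
    return size_tb * 1e12 * 8 / TRANSACTION_TIME_S / 1e9

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB in 800 s -> {required_gbps(size_tb):,.0f} Gbps")
# 1 TB needs 10 Gbps, 10 TB needs 100 Gbps, and 100 TB needs 1,000 Gbps (1 Tbps),
# which matches the call for 10 Gbps wavelengths now and Terabit switching later.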