1
Transporting High Energy Physics Experiment Data over High Speed Genkai/Hyeonhae
Yukio.Karita@KEK.jp
4 October 2002, Korea-Kyushu Gigabit Network meeting, Oita
2
KEK: High Energy Accelerator Research Organization (高エネルギー加速器研究機構), located at Tsukuba.
- High Energy Physics experiments (BELLE, K2K, ...) are organized as international collaborations and are producing a huge amount of data.
- Many Korean collaborators participate in these experiments.
- Network connectivity between KEK and the Korean universities has been the biggest problem for us. Thanks to APAN it has improved very much, but it is still very insufficient.
3
BELLE is transferring the data with SuperSINET/GbE's.
[Diagram: BELLE computing system and data flow]
- Data Storage: 630 TB
- PC Farm: 240 x Pentium III (8,200 SPECint95)
- Compute Server: 5,000 SPECint95
- Disk: 14 TB
- Internal Servers on the LAN
- Data rate: 20 MB/sec
- Data Volume: 50 TB/year or more
- SuperSINET / GbE to Universities
- Started in 1999
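As a rough cross-check of these numbers (an illustrative calculation only, assuming decimal units, 1 TB = 10^12 bytes):

```python
# Back-of-the-envelope check of the BELLE figures on this slide.
# The decimal-unit convention (1 TB = 1e12 bytes) is an assumption.
DATA_RATE_MB_S = 20            # data rate from the slide, MB/sec
YEARLY_VOLUME_TB = 50          # data volume from the slide, TB/year
SECONDS_PER_YEAR = 365 * 24 * 3600

# Sustained bandwidth needed to mirror one year of data to a remote site:
avg_mbps = YEARLY_VOLUME_TB * 1e12 * 8 / SECONDS_PER_YEAR / 1e6
print(f"{YEARLY_VOLUME_TB} TB/year as a steady stream: {avg_mbps:.0f} Mbit/s")  # ~13 Mbit/s

# Bandwidth needed to follow the 20 MB/sec output in real time:
peak_mbps = DATA_RATE_MB_S * 8
print(f"{DATA_RATE_MB_S} MB/sec in real time: {peak_mbps} Mbit/s")              # 160 Mbit/s
```

Both figures fit within a single Gigabit Ethernet circuit with room to spare for catch-up transfers, so a dedicated end-to-end GbE per university is a comfortable fit.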
4
SuperSINET Cable Map
[Map of SuperSINET nodes: Sapporo, Sendai 1, Sendai 2, Sendai 3, KEK, Tsukuba, TIT, Waseda, Tokyo 1, Tokyo 2, Tokyo 3, NAO, ISAS, Okazaki, NIG, NIFS, Nagoya, Kanazawa, Kyoto 1, Kyoto 2, Osaka, Kobe, Hiroshima, NII 1, NII 2, Fukuoka; optical cross-connects (OXC) at Tokyo, Nagoya and Osaka]
5
SuperSINET Typical Circuit Configuration
[Diagram: Sites 1-5 connect through routers (R) and WDM equipment to optical cross-connects (OXC) at Hub A and Hub B; dedicated GbE circuits are carried over the 10 Gbps WDM backbone]
7
In Japan, the data are transported to major Japanese universities with SuperSINET/GbE (dedicated end-to-end GbE's for HEP) and are analyzed there.
---> Distributed Data Analysis (or Data Grid)
Our hope is to extend this scheme to Korean universities:
- SuperSINET/GbE to Kyushu Univ
- GbE over Genkai/Hyeonhae
- end-to-end GbE in KOREN
If this approach is difficult, the second solution is to use the IP connectivity through SuperSINET - Genkai/Hyeonhae - KOREN.
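Once any piece of this path is in place (the dedicated GbE or the fallback IP route), a quick bulk-transfer test between the two ends shows how much of it a single application can actually use. Below is a minimal sketch of such a probe; the port number, transfer size and command-line interface are hypothetical illustrations, not part of this talk, and in practice a standard tool such as iperf would be used.

```python
# Minimal memory-to-memory TCP throughput probe (illustrative sketch).
# The port, transfer size and CLI are assumptions made for this example.
# Receiving site:  python probe.py serve
# Sending site:    python probe.py send <receiver-host>
import socket
import sys
import time

PORT = 5201                  # hypothetical port for the test
CHUNK = 1 << 20              # 1 MiB per send/recv call
TOTAL_BYTES = 1 << 30        # 1 GiB test transfer

def serve() -> None:
    """Accept one connection and drain it, counting the bytes received."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:          # sender closed the connection
                    break
                received += len(data)
            print(f"received {received / 1e6:.0f} MB from {addr[0]}")

def send(host: str) -> None:
    """Push TOTAL_BYTES of zeros to the receiver and report the rate."""
    payload = b"\0" * CHUNK
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        while sent < TOTAL_BYTES:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    print(f"{sent * 8 / elapsed / 1e6:.0f} Mbit/s over {elapsed:.1f} s")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "serve":
        serve()
    else:
        send(sys.argv[2])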
8
ICFA and International Networking
- ICFA = International Committee for Future Accelerators
- ICFA Statement on "Communications in Int'l HEP Collaborations" of October 17, 1996. See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
- "ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:
  - Review their operating methods to ensure they are fully adapted to remote participation
  - Strive to provide the necessary communications facilities and adequate international bandwidth"
9
ICFA Standing Committee on Interregional Connectivity (SCIC)
- Created by ICFA in July 1998 in Vancouver
- CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe. As part of the process of developing these recommendations, the committee should:
  - Monitor traffic
  - Keep track of technology developments
  - Periodically review forecasts of future bandwidth needs, and
  - Provide early warning of potential problems
- Create subcommittees when necessary to meet the charge
- The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors
10
ICFA-SCIC Core Membership
- Representatives from major HEP laboratories: Manuel Delfino (CERN) (to W. Von Rueden), Michael Ernst (DESY), Matthias Kasemann (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
- User representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
- For Russia: Slava Ilyin (MSU)
- ECFA representatives: Frederico Ruggieri (INFN Frascati), Denis Linglin (IN2P3, Lyon)
- ACFA representatives: Rongsheng Xu (IHEP Beijing), HwanBae Park (Korea University)
- For South America: Sergio F. Novaes (University de S.Paulo)
11
LHC Computing Model: Data Grid Hierarchy (ca. 2005)
[Diagram: tiered hierarchy from the online system and the Tier 0+1 offline farm / CERN computer centre (~25 TIPS) down through Tier 1 centres (FNAL, IN2P3, Tokyo/KEK, INFN), Tier 2 centres, Tier 3 institutes (~0.25 TIPS each) and Tier 4 workstations]
- Experiment to online system: ~PByte/sec; online system to offline farm: ~100 MBytes/sec
- CERN to Tier 1 centres: ~2.5 Gbits/sec
- Tier 1 to Tier 2 centres: ~0.6-2.5 Gbps
- Tier 2 to institutes: 100-1000 Mbits/sec, with a physics data cache at the institute
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
- CERN/Outside resource ratio ~1:2; Tier0 : (ΣTier1) : (ΣTier2) ~ 1:1:1
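To give a feeling for what these link speeds mean in practice, here is a small illustrative calculation (assumptions: the quoted bandwidth is fully available to the transfer, decimal units, and the dataset sizes are chosen only for the example):

```python
# Time to move example dataset sizes across each tier boundary.
# Link rates come from the diagram above; dataset sizes are assumptions.
LINKS_GBPS = {
    "CERN Tier0 -> Tier1": 2.5,
    "Tier1 -> Tier2":      0.6,   # lower end of the 0.6-2.5 Gbps range
    "Tier2 -> institute":  0.1,   # lower end of 100-1000 Mbits/sec
}
DATASET_SIZES_TB = [1, 10, 100]

for link, gbps in LINKS_GBPS.items():
    for size_tb in DATASET_SIZES_TB:
        hours = size_tb * 1e12 * 8 / (gbps * 1e9) / 3600
        print(f"{size_tb:>4} TB over {link:<22} at {gbps:.1f} Gbps: {hours:7.1f} h")
```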
12
HEP Major Links: BW Roadmap in Gbps; Shown at ICHEP2002
13
iGrid2002: OC192+OC48 Trial, Sep 2002
Assisted by Level 3 (OC192, short-term donation) and Cisco (10GbE and 16 x 1GbE)
14
US-CERN DataTAG Link Tests with Grid TCP
[Throughput plots: 3 streams on 3 GbE ports (Syskonnect); 1-3 streams on 1-3 GbE ports (Syskonnect)]
15
HEP Lambda Grids: Fibers for Physics
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores.
- Survivability of the HEP Global Grid System, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
- Example: take 800 secs to complete the transaction. Then:
    Transaction Size (TB)   Net Throughput (Gbps)
    1                       10
    10                      100
    100                     1000 (capacity of fiber today)
- Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within ~6-10 years, would enable "Petascale Grids with Terabyte transactions" within this decade, as required to fully realize the discovery potential of major HEP programs, as well as other data-intensive fields.
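The table is simply throughput = size / time; a minimal check (decimal units assumed):

```python
# Reproduces the table above: net throughput needed to complete a
# transaction of the given size in 800 seconds (1 TB = 1e12 bytes assumed).
TRANSACTION_TIME_S = 800

for size_tb in (1, 10, 100):
    gbps = size_tb * 1e12 * 8 / TRANSACTION_TIME_S / 1e9
    print(f"{size_tb:>3} TB in {TRANSACTION_TIME_S} s -> {gbps:.0f} Gbps")
```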