1
HENP Grids and Networks: Global Virtual Organizations
Harvey B. Newman, FAST Meeting, Caltech, July 1, 2002
http://l3www.cern.ch/~newman/HENPGridsNets_FAST070202.ppt
2
Computing Challenges: Petabytes, Petaflops, Global VOs
- Geographical dispersion: of people and resources
- Complexity: the detector and the LHC environment
- Scale: tens of Petabytes per year of data
5000+ physicists, 250+ institutes, 60+ countries
Major challenges associated with:
- Communication and collaboration at a distance
- Managing globally distributed computing & data resources
- Cooperative software development and physics analysis
New forms of distributed systems: Data Grids
3
Four LHC Experiments: The Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP violation
Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
0.1 to 1 Exabyte (1 EB = 10^18 bytes) for the LHC experiments (2007 to ~2012?)
4
LHC: Higgs decay into 4 muons (tracker only); 1000X LEP data rate
10^9 events/sec; selectivity: 1 in 10^13 (1 person in a thousand world populations)
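A back-of-envelope check of what that selectivity implies for the selected-event rate; a minimal sketch using only the two numbers quoted above:

    # Back-of-envelope: selected-event rate implied by the numbers above.
    event_rate = 1e9        # collisions per second
    selectivity = 1e-13     # ~1 interesting event per 10^13

    per_second = event_rate * selectivity        # 1e-4 per second
    per_day = per_second * 86400                 # ~8.6 per day
    print(f"~{per_second:.0e} selected/s, i.e. ~{per_day:.0f} per day")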
5
LHC Data Grid Hierarchy (diagram):
- Experiment -> Online System: ~PByte/sec
- Online System -> Tier 0+1 (CERN: 700k SI95, ~1 PB disk, tape robot): ~100-400 MBytes/sec
- Tier 0+1 -> Tier 1 centers (FNAL: 200k SI95, 600 TB; IN2P3 Center; INFN Center; RAL Center): 2.5-10 Gbps
- Tier 1 -> Tier 2 centers: ~2.5-10 Gbps
- Tier 2 -> Tier 3 (institutes, ~0.25 TIPS): 0.1-10 Gbps
- Tier 3 -> Tier 4 (workstations, physics data cache)
- CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
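To give a feel for the quoted link speeds against the ~1 PB disk pool at the Tier 0+1 centre, a minimal sketch; the 1 PB and 2.5-10 Gbps figures are taken from the diagram above, everything else is just arithmetic:

    # Time to move ~1 PB over the Tier0-Tier1 link speeds quoted above.
    data_bytes = 1e15                              # ~1 PB
    for gbps in (2.5, 10.0):
        days = data_bytes * 8 / (gbps * 1e9) / 86400
        print(f"{gbps:>4} Gbps: ~{days:.0f} days to move 1 PB")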
6
Emerging Data Grid User Communities
- Grid Physics Network (GriPhyN): ATLAS, CMS, LIGO, SDSS
- Particle Physics Data Grid (PPDG); Int'l Virtual Data Grid Lab (iVDGL)
- NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
- Access Grid; VRVS: supporting group-based collaboration
And:
- Genomics, Proteomics, ...
- The Earth System Grid and EOSDIS
- Federating Brain Data
- Computed MicroTomography
- Virtual Observatories
7
HENP Related Data Grid Projects
Projects:
- PPDG I, USA, DOE, $2M, 1999-2001
- GriPhyN, USA, NSF, $11.9M + $1.6M, 2000-2005
- EU DataGrid, EU, EC, €10M, 2001-2004
- PPDG II (CP), USA, DOE, $9.5M, 2001-2004
- iVDGL, USA, NSF, $13.7M + $2M, 2001-2006
- DataTAG, EU, EC, €4M, 2002-2004
- GridPP, UK, PPARC, >$15M, 2001-2004
- LCG (Phase 1), CERN Member States, 30 MCHF, 2002-2004
Many other projects of interest to HENP:
- Initiatives in US, UK, Italy, France, NL, Germany, Japan, ...
- Networking initiatives: DataTAG, AMPATH, CALREN-XD, ...
- US Distributed Terascale Facility ($53M, 12 TeraFlops, 40 Gb/s network)
8
Daily, weekly, monthly and yearly statistics on the 155 Mbps US-CERN link:
- 20 - 100 Mbps used routinely in '01
- BaBar: 600 Mbps throughput in '02
- BW upgrades quickly followed by upgraded production use
9
Tier A "Physicists have indeed foreseen to test the GRID principles starting first from the Computing Centres in Lyon and Stanford (California). A first step towards the ubiquity of the GRID." Pierre Le Hir Le Monde 12 april 2001 CERN-US Line + Abilene Renater + ESnet 3/2002 D. Linglin: LCG Wkshop Two centers are trying to work as one: -Data not duplicated -Internationalization -transparent access, etc…
10
RNP Brazil (to 20 Mbps); FIU Miami/So. America (to 80 Mbps)
11
Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
[*] Installed BW; maximum link occupancy of 50% assumed. See http://gate.hep.anl.gov/lprice/TAN
12
Links Required to US Labs and Transatlantic [*]
[*] Maximum link occupancy of 50% assumed; OC3 = 155 Mbps; OC12 = 622 Mbps; OC48 = 2.5 Gbps; OC192 = 10 Gbps. Note: new ESnet upgrade plan.
13
MONARC: CMS Analysis Process
Hierarchy of processes (experiment, analysis groups, individuals):
- RAW data reconstruction / re-processing: experiment-wide activity (10^9 events), 3 times per year; new detector calibrations or understanding; 3000 SI95-sec/event, 3 jobs per year
- Selection: ~20 groups' activity (10^9 -> 10^7 events); iterative selection, once per month; trigger-based and physics-based refinements; 25 SI95-sec/event, ~20 jobs per month
- Analysis: ~25 individuals per group (10^6 - 10^7 events), ~once per day; different physics cuts & MC comparison; algorithms applied to data to get results; 10 SI95-sec/event, ~500 jobs per day
- Monte Carlo: 5000 SI95-sec/event
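A hedged sketch of the CPU a single re-processing pass implies, using the per-event cost above; the one-month turnaround is an assumption for illustration, not a figure from the slide:

    # SI95-seconds for one experiment-wide reconstruction pass.
    events = 1e9                 # RAW events
    cost = 3000                  # SI95-sec per event (reconstruction)

    pass_cpu = events * cost                     # 3e12 SI95-sec
    month = 30 * 86400                           # assumed one-month turnaround
    print(f"One pass: {pass_cpu:.1e} SI95-sec -> ~{pass_cpu / month / 1e3:.0f}k SI95 sustained")

For comparison, the hierarchy slide earlier quotes ~700k SI95 at the CERN Tier 0+1 centre.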
14
Tier0-Tier1 Link Requirements Estimate: for Hoffmann Report 2001
1) Tier1-Tier0 data flow for analysis: 0.5 - 1.0 Gbps
2) Tier2-Tier0 data flow for analysis: 0.2 - 0.5 Gbps
3) Interactive collaborative sessions (30 peak): 0.1 - 0.3 Gbps
4) Remote interactive sessions (30 flows peak): 0.1 - 0.2 Gbps
5) Individual (Tier3 or Tier4) data transfers: 0.8 Gbps (limit to 10 flows of 5 Mbytes/sec each)
TOTAL per Tier0-Tier1 link: 1.7 - 2.8 Gbps (arithmetic check below)
NOTE:
- Adopted by the LHC experiments; given in the Steering Committee Report on LHC Computing as "1.5 - 3 Gbps per experiment"
- Corresponds to ~10 Gbps baseline BW installed on the US-CERN link
- The report also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet + WAN by 2002-3
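A quick arithmetic check that the five items sum to the quoted total:

    # Sum of the per-item flow estimates above, in Gbps (low end, high end).
    items = [(0.5, 1.0), (0.2, 0.5), (0.1, 0.3), (0.1, 0.2), (0.8, 0.8)]
    low = sum(lo for lo, _ in items)
    high = sum(hi for _, hi in items)
    print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")   # 1.7 - 2.8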
15
Tier0-Tier1 BW Requirements Estimate: for Hoffmann Report 2001
- Does not include more recent ATLAS data estimates: 270 Hz at 10^33 instead of 100 Hz; 400 Hz at 10^34 instead of 100 Hz; 2 MB/event instead of 1 MB/event?
- Does not allow fast download to Tier3+4 of "small" object collections. Example: download 10^7 events of AODs (10^4 bytes each), i.e. 100 GBytes; at 5 Mbytes/sec per person (above) that's 6 hours! (worked out below)
- This is still a rough, bottom-up, static, and hence conservative model:
  - A dynamic distributed DB or "Grid" system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth
  - Does not include "virtual data" operations, derived data copies, or data-description overheads
  - Further MONARC model studies are needed
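The download example works out as follows; a minimal sketch using the slide's own numbers:

    # 10^7 AOD events of 10^4 bytes each, pulled at 5 Mbytes/sec.
    total_bytes = 1e7 * 1e4                  # 100 GBytes
    rate = 5e6                               # 5 Mbytes/sec per person
    hours = total_bytes / rate / 3600
    print(f"{total_bytes / 1e9:.0f} GB at 5 MB/s -> {hours:.1f} hours")   # ~5.6, i.e. ~6 hours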
16
Maximum Throughput on Transatlantic Links (155 Mbps)
- 8/10/01: 105 Mbps reached with 30 streams: SLAC-IN2P3
- 9/1/01: 102 Mbps in one stream: CIT-CERN
- 11/5/01: 125 Mbps in one stream (modified kernel): CIT-CERN
- 1/09/02: 190 Mbps for one stream shared on 2 x 155 Mbps links
- 3/11/02: 120 Mbps disk-to-disk with one stream on a 155 Mbps link (Chicago-CERN)
- 5/20/02: 450 Mbps SLAC-Manchester on OC12 with ~100 streams
- 6/1/02: 290 Mbps Chicago-CERN, one stream on OC12 (modified kernel)
Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/; and the Internet2 E2E Initiative: http://www.internet2.edu/e2e
17
Some Recent Events: Reported 6/1/02 to ICFA/SCIC
Progress in high throughput (0.1 to 1 Gbps):
- Land Speed Record: SURFnet - Alaska (IPv6) (0.4+ Gbps)
- SLAC - Manchester (Les C. and Richard H-J) (0.4+ Gbps)
- Tsunami (Indiana) (0.8 Gbps UDP)
- Tokyo - KEK (0.5 - 0.9 Gbps)
Progress in pre-production and production networking:
- 10 Mbytes/sec FNAL-CERN (Michael Ernst)
- 15 Mbytes/sec disk-to-disk Chicago-CERN (Sylvain Ravot)
KPNQwest files for Chapter 11; stopped its network yesterday:
- Near-term pricing of competitor (DT) OK
- Unknown impact on prices and future planning in the medium and longer term
18
Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- US-CERN link: 622 Mbps this month
- DataTAG 2.5 Gbps research link in Summer 2002
- 10 Gbps research link by approx. mid-2003
- Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America
- Baseline evolution typical of major HENP links 2001-2006
19
Total U.S. Internet Traffic (chart, 1970-2010, traffic scale 10 bps to ~100 Pbps; source: Roberts et al., 2001)
- Voice crossover: August 2000
- Growth-rate annotations: 4X/year and 2.8X/year; projected at 3X/year
- Other annotations: ARPA & NSF data to '96; new measurements; limit of same % of GDP as voice
20
Internet Growth Rate Fluctuates Over Time (source: Roberts et al., 2002)
U.S. Internet edge traffic growth rate, 6-month lagging measure, Jan 00 - Jan 02 (chart y-axis: growth rate per year, 0.0 - 4.5):
- Average: 3.0/year
- 10/00 - 4/01 growth reported: 3.6/year and 4.0/year
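One way a "6 month lagging" annual growth rate of this kind can be computed; a sketch with placeholder traffic values, not data from the chart:

    # Annualize the growth observed over the most recent six months.
    def annual_growth(traffic_now, traffic_six_months_ago):
        half_year = traffic_now / traffic_six_months_ago
        return half_year ** 2                 # compound two half-years into one year

    print(annual_growth(2.0, 1.0))            # doubling in 6 months -> 4.0x per year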
21
AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)
- Monthly traffic: 2X growth from 8/00 - 3/01; 2X growth from 8/01 - 12/01
- Hourly traffic chart for 3/22/02 (scale 2.0 - 6.0 Gbps)
22
ICFA SCIC Meeting March 9 at CERN: Updates from Members
- Abilene: upgrade from 2.5 to 10 Gbps; additional scheduled lambdas planned for targeted applications: Pacific and National Light Rail
- US-CERN: upgrade on track to 622 Mbps in July; setup and testing done in STARLIGHT; 2.5G research lambda by this summer: STARLIGHT-CERN; 2.5G triangle between STARLIGHT (US), SURFnet (NL) and CERN
- SLAC + IN2P3 (BaBar): getting 100 Mbps over the 155 Mbps CERN-US link; 50 Mbps over the RENATER 155 Mbps link, limited by ESnet; 600 Mbps throughput is the BaBar target for this year
- FNAL: expect ESnet upgrade to 622 Mbps this month; plans for dark fiber to STARLIGHT underway, could be done in ~4 months; railway or electric co. provider
23
ICFA SCIC: A&R Backbone and International Link Progress u GEANT Pan-European Backbone (http://www.dante.net/geant) http://www.dante.net/geant è Now interconnects 31 countries è Includes many trunks at 2.5 and 10 Gbps u UK è 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene u SuperSINET (Japan): 10 Gbps IP and 10 Gbps Wavelength è Upgrade to Two 0.6 Gbps Links, to Chicago and Seattle è Plan upgrade to 2 X 2.5 Gbps Connection to US West Coast by 2003 u CA*net4 (Canada): Interconnect customer-owned dark fiber nets across Canada at 10 Gbps, starting July 2002 è “Lambda-Grids” by ~2004-5 u GWIN (Germany): Connection to Abilene Upgraded to 2 X 2.5 Gbps early in 2002 u Russia è Start 10 Mbps link to CERN and ~90 Mbps to US Now
24
- 210 primary participants: all 50 states, D.C. and Puerto Rico
- 80 partner corporations and non-profits
- 22 state research and education nets
- 15 "GigaPoPs" support 70% of members
- 2.5 -> 10 Gbps backbone
- Caltech connection with GbE to the new backbone
25
National R&E Network Example Germany: DFN TransAtlanticConnectivity Q1 2002 STM 4 STM 16 u 2 X OC12 Now: NY-Hamburg and NY-Frankfurt u ESNet peering at 34 Mbps u Upgrade to 2 X OC48 expected in Q1 2002 u Direct Peering to Abilene and Canarie expected u UCAID will add another 2 OC48’s; Proposing a Global Terabit Research Network (GTRN) u FSU Connections via satellite: Yerevan, Minsk, Almaty, Baikal è Speeds of 32 - 512 kbps u SILK Project (2002): NATO funding è Links to Caucasus and Central Asia (8 Countries) è Currently 64-512 kbps è Propose VSAT for 10-50 X BW: NATO + State Funding
26
National Research Networks in Japan
- SuperSINET: started operation January 4, 2002; support for 5 important areas: HEP, genetics, nano-technology, space/astronomy, Grids; provides 10 wavelengths: 10 Gbps IP connection, 7 direct intersite GbE links, some connections to 10 GbE in JFY2002
- HEPnet-J: will be re-constructed with MPLS-VPN in SuperSINET
- Proposal: two transpacific 2.5 Gbps wavelengths, and a Japan-CERN Grid testbed by ~2003
(Topology map: Tokyo, Osaka, Nagoya hubs; Osaka U, Kyoto U, ICR Kyoto-U, Nagoya U, NIFS, NIG, KEK, Tohoku U, IMS, U-Tokyo, NAO, NII Hitot., NII Chiba, ISAS; IP over WDM paths, IP routers, OXCs)
27
DataTAG Project
- EU-solicited project. CERN, PPARC (UK), Amsterdam (NL), and INFN (IT); and US (DOE/NSF: UIC, NWU and Caltech) partners
- Main aims: ensure maximum interoperability between US and EU Grid projects; transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle 7/02 (10 Gbps triangle in 2003)
(Map: NL SURFnet, UK SuperJANET4, It GARR-B, Fr Renater, GEANT; Geneva, New York; Abilene, ESnet, CALREN; STAR-TAP, STARLIGHT)
28
TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech
A preview of the Grid hierarchy and networks of the LHC era; idea to extend the TeraGrid to CERN.
- DTF backplane: 4 x 10 Gbps
- Sites/hubs shown: NCSA/UIUC, ANL, UIC, Ill Inst of Tech, Univ of Chicago, Starlight / NW Univ, Indianapolis (Abilene NOC), Caltech, San Diego; multiple carrier hubs; I-WIRE; Abilene (Chicago, Indianapolis, Urbana)
- Link types: OC-48 (2.5 Gb/s, Abilene); multiple 10 GbE (Qwest); multiple 10 GbE (I-WIRE dark fiber)
Source: Charlie Catlett, Argonne
29
CA ONI, CALREN-XD + Pacific Light Rail Backbones (Proposed) Also: LA-Caltech Metro Fiber; National Light Rail
30
Key Network Issues & Challenges
Net infrastructure requirements for high throughput:
- Packet loss must be ~zero (at and below 10^-6), i.e. no "commodity" networks; need to track down non-congestion packet loss
- No local infrastructure bottlenecks: multiple Gigabit Ethernet "clear paths" between selected host pairs are needed now; 10 Gbps Ethernet paths by 2003 or 2004
- TCP/IP stack configuration and tuning absolutely required: large windows (see the window-sizing sketch below), possibly multiple streams; new concepts of fair use must then be developed
- Careful router, server, client and interface configuration; sufficient CPU, I/O and NIC throughput
- End-to-end monitoring and tracking of performance; close collaboration with local and "regional" network staffs
- TCP does not scale to the 1-10 Gbps range
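To make the "large windows" requirement concrete: the TCP window must cover the path's bandwidth-delay product. A minimal sketch; the ~120 ms US-CERN round-trip time is an assumed, illustrative value:

    # TCP window needed to keep a path full: window >= bandwidth x RTT.
    def window_mbytes(gbps, rtt_ms):
        return gbps * 1e9 / 8 * (rtt_ms / 1e3) / 1e6

    for gbps in (0.155, 0.622, 2.5, 10.0):
        print(f"{gbps:>6} Gbps x 120 ms -> ~{window_mbytes(gbps, 120):.1f} MB window")

Typical default windows of the era were tens of kilobytes, which is why stack tuning (and sometimes multiple streams) was unavoidable.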
31
A Short List: Revolutions in Information Technology (2002-7)
- Managed global Data Grids (as above)
- Scalable data-intensive metro and long-haul network technologies:
  - DWDM: 10 Gbps then 40 Gbps per wavelength; 1 to 10 Terabits/sec per fiber
  - 10 Gigabit Ethernet (see www.10gea.org): 10GbE / 10 Gbps LAN/WAN integration
  - Metro buildout and optical cross connects
  - Dynamic provisioning: dynamic path building; "Lambda Grids"
- Defeating the "last mile" problem (wireless; or Ethernet in the first mile):
  - 3G and 4G wireless broadband (from ca. 2003); and/or fixed wireless "hotspots"
  - Fiber to the home
  - Community-owned networks
32
A Short List: Coming Revolutions in Information Technology
- Storage virtualization:
  - Grid-enabled Storage Resource Middleware (SRM)
  - iSCSI (Internet Small Computer Storage Interface); integrated with 10 GbE; global file systems
- Internet information software technologies:
  - Global information "broadcast" architecture, e.g. the Multipoint Information Distribution Protocol (Tie.Liao@inria.fr)
  - Programmable coordinated agent architectures, e.g. Mobile Agent Reactive Spaces (MARS) by Cabri et al., University of Modena
- The "Data Grid" - human interface:
  - Interactive monitoring and control of Grid resources, by authorized groups and individuals, and by autonomous agents
33
HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
34
One Long Range Scenario (ca. 2008-12): HENP as a Driver of Optical Networks; Petascale Grids with TB Transactions
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
- Example: take 800 secs to complete the transaction. Then (arithmetic check below):
  Transaction Size (TB)    Net Throughput (Gbps)
  1                        10
  10                       100
  100                      1000 (capacity of fiber today)
- Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-10 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields.
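The table follows from throughput = size / time; a quick check at the 800-second target:

    # Throughput required to move each transaction size in 800 seconds.
    for size_tb in (1, 10, 100):
        gbps = size_tb * 1e12 * 8 / 800 / 1e9
        print(f"{size_tb:>4} TB in 800 s -> {gbps:.0f} Gbps")    # 10, 100, 1000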
35
Internet2 HENP WG [*]
Mission: to help ensure that the required
- national and international network infrastructures (end-to-end),
- standardized tools and facilities for high-performance, end-to-end monitoring and tracking, and
- collaborative systems
are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community; and to carry out these developments in a way that is broadly applicable across many fields.
Formed an Internet2 WG as a suitable framework: Oct. 26, 2001.
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y: J. Williams (Indiana)
Website: http://www.internet2.edu/henp; also see the Internet2 End-to-End Initiative: http://www.internet2.edu/e2e
36
True End-to-End Experience
- User perception
- Application
- Operating system
- Host IP stack
- Host network card
- Local Area Network
- Campus backbone network
- Campus link to regional network/GigaPoP
- GigaPoP link to Internet2 national backbones
- International connections
(Diagram spans "eyeball" to application stack to network jack and beyond.)
37
HENP Scenario Limitations: Technologies and Costs
- Router technology and costs (ports and backplane)
- Computer CPU, disk and I/O channel speeds to send and receive data
- Link costs: unless dark fiber (?)
- Multi-gigabit transmission protocols, end-to-end
- "100 GbE" Ethernet (or something else) by ~2006: for LANs to match WAN speeds
38
Throughput quality improvements: BW_TCP < MSS / (RTT * sqrt(loss)) [*] (evaluated below)
- 80% improvement/year: factor of 10 in 4 years
- China improves but is far behind; Eastern Europe far behind
[*] See "Macroscopic Behavior of the TCP Congestion Avoidance Algorithm," Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), 7/1997
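A sketch evaluating the quoted bound for a transatlantic path; the 1460-byte MSS and ~170 ms RTT are assumed, typical values rather than figures from the slide:

    # Mathis et al. bound quoted above: BW < MSS / (RTT * sqrt(loss)).
    from math import sqrt

    def bound_mbps(mss_bytes, rtt_s, loss):
        return mss_bytes * 8 / (rtt_s * sqrt(loss)) / 1e6

    for loss in (1e-4, 1e-6, 1e-8):
        print(f"loss={loss:.0e}: < {bound_mbps(1460, 0.17, loss):.0f} Mbps")

The steep dependence on loss is why the "Key Network Issues" slide insists on packet loss at or below 10^-6.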
39
11,900 hosts; 6,620 registered users in 61 countries; 43 reflectors (7 on Internet2); annual growth 2 to 3X
40
Networks, Grids and HENP
- Next-generation 10 Gbps network backbones are almost here in the US, Europe and Japan; first stages arriving, starting now
- Major transoceanic links at 2.5 - 10 Gbps in 2002-3
- Network improvements are especially needed in Southeast Europe, South America, and some other regions: Romania, Brazil; India, Pakistan, China; Africa
- Removing regional and last-mile bottlenecks and compromises in network quality is now on the critical path
- Getting high (reliable; Grid) application performance across networks means:
  - End-to-end monitoring; a coherent approach
  - Getting high-performance (TCP) toolkits into users' hands
  - Working in concert with AMPATH, Internet2 E2E, the I2 HENP WG, DataTAG, the Grid projects and the GGF