
1 HENP Grids and Networks: Global Virtual Organizations
Harvey B. Newman, Caltech
Internet2 Virtual Member Meeting, March 19, 2002
http://l3www.cern.ch/~newman/HENPGridsNets_I2Virt031902.ppt

2 Computing Challenges: Petabytes, Petaflops, Global VOs
- Geographical dispersion: of people and resources
- Complexity: the detector and the LHC environment
- Scale: tens of Petabytes per year of data
5000+ physicists, 250+ institutes, 60+ countries
Major challenges associated with:
- Communication and collaboration at a distance
- Managing globally distributed computing and data resources
- Remote software development and physics analysis
R&D: new forms of distributed systems: Data Grids

3 The Large Hadron Collider (2006-)
- The next-generation particle collider: the largest superconducting installation in the world
- Bunch-bunch collisions at 40 MHz, each generating ~20 interactions; only one in a trillion may lead to a major physics discovery
- Real-time data filtering: Petabytes per second down to Gigabytes per second
- Accumulated data of many Petabytes per year
Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams
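
The rates on this slide imply a data-reduction factor of roughly a million in the real-time filter. A small back-of-the-envelope sketch; the derived numbers are inferences from the figures above, not separate measurements:

```python
# Rough arithmetic for the LHC real-time filtering chain described above.
# Input numbers are taken from the slide; the reduction factor is derived.

bunch_crossing_rate_hz = 40e6      # 40 MHz bunch-bunch collisions
interactions_per_crossing = 20     # ~20 interactions per crossing
raw_rate_bytes_per_s = 1e15        # ~Petabyte/s off the detector
filtered_rate_bytes_per_s = 1e9    # ~Gigabyte/s kept after filtering

interaction_rate = bunch_crossing_rate_hz * interactions_per_crossing
reduction_factor = raw_rate_bytes_per_s / filtered_rate_bytes_per_s

print(f"Interaction rate: {interaction_rate:.1e} interactions/s")   # ~8e8/s
print(f"Real-time data reduction: ~{reduction_factor:.0e}x (PB/s -> GB/s)")
```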

4 Four LHC Experiments: The Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb
Higgs + new particles; quark-gluon plasma; CP violation
Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
0.1 to 1 Exabyte (1 EB = 10^18 Bytes) between 2007 and ~2012(?) for the LHC experiments

5 LHC: Higgs decay into 4 muons (tracker only); 1000x the LEP data rate
10^9 events/sec, selectivity: 1 in 10^13 (like picking out 1 person in a thousand world populations)

6 Evidence for the Higgs at LEP at M ~ 115 GeV; the LEP program has now ended

7 LHC Data Grid Hierarchy
- Tier 0+1 (CERN): 700k SI95, ~1 PB disk, tape robot; fed by the Online System at ~100-400 MBytes/sec (raw data off the experiment at ~PByte/sec)
- Tier 1 centers (e.g. FNAL: 200k SI95, 600 TB; IN2P3, INFN, RAL): ~2.5 Gbps links to CERN
- Tier 2 centers: ~2.5 Gbps links
- Tier 3 (institutes, ~0.25 TIPS) and Tier 4 (workstations): 100-1000 Mbits/sec links
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels, with a physics data cache
- CERN/outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
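
A minimal sketch capturing the hierarchy above as a data structure, e.g. for driving a simple capacity summary. The capacities and link speeds are copied from the slide; the exact tier-to-link pairing follows the standard tiered model and should be read as illustrative, not authoritative:

```python
# Illustrative data structure for the LHC Data Grid hierarchy shown above.
# The tier-to-link pairing is an assumption based on the standard tiered model.

lhc_tiers = {
    "Tier 0+1 (CERN)": {
        "cpu": "700k SI95", "storage": "~1 PB disk + tape robot",
        "uplink": "~100-400 MB/s from the online system",
    },
    "Tier 1 (e.g. FNAL, IN2P3, INFN, RAL)": {
        "cpu": "200k SI95 (FNAL)", "storage": "600 TB (FNAL)",
        "uplink": "~2.5 Gbps to Tier 0+1",
    },
    "Tier 2 (regional centers)": {"uplink": "~2.5 Gbps to Tier 1"},
    "Tier 3 (institutes, ~0.25 TIPS)": {"uplink": "100-1000 Mbps"},
    "Tier 4 (workstations)": {},
}

for tier, attrs in lhc_tiers.items():
    print(tier, "->", attrs.get("uplink", "local access"))
```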

8 Why Worldwide Computing? Regional Center Concept Advantages
- Maximize total funding resources to meet the total computing and data handling needs
- An N-tiered model for fair-shared access for physicists everywhere; smaller size, greater control as N increases
- Utilize all intellectual resources and expertise in all time zones, involving students and physicists at home universities and labs
- Greater flexibility to pursue different physics interests, priorities, and resource allocation strategies by region, and/or by common interest: physics topics, subdetectors, ...
- Manage the system's complexity by partitioning facility tasks, to manage and focus resources
- Efficient use of the network: higher throughput per flow: local > regional > national > international

9 HENP Related Data Grid Projects
Projects:
- PPDG I (USA, DOE): $2M, 1999-2001
- GriPhyN (USA, NSF): $11.9M + $1.6M, 2000-2005
- EU DataGrid (EU, EC): €10M, 2001-2004
- PPDG II (CP) (USA, DOE): $9.5M, 2001-2004
- iVDGL (USA, NSF): $13.7M + $2M, 2001-2006
- DataTAG (EU, EC): €4M, 2002-2004
- GridPP (UK, PPARC): >$15M, 2001-2004
- LCG (Ph1) (CERN MS): 30 MCHF, 2002-2004
Many other projects of interest to HENP:
- Initiatives in US, UK, Italy, France, NL, Germany, Japan, ...
- US and EU networking initiatives: AMPATH, I2, DataTAG
- US Distributed Terascale Facility ($53M, 12 TeraFlops, 40 Gb/s network)

10 CMS Production: Event Simulation and Reconstruction
- Worldwide production at 20 sites, "Grid-enabled" and automated
- Common production tools (IMPALA), GDMP; simulation and digitization (with and without pile-up)
- Sites include: Caltech, FNAL, CERN, INFN (9 sites), Moscow, IN2P3, Helsinki, Wisconsin, Bristol, UCSD, UFL, Imperial College; status ranges from fully operational to in progress

11 Next Generation Networks for Experiments: Goals and Needs
- Providing rapid access to event samples and subsets from massive data stores: from ~400 Terabytes in 2001, ~Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Providing analyzed results with rapid turnaround, by coordinating and managing the LIMITED computing, data handling and NETWORK resources effectively
- Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability
- Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable (monitored), high performance, for "Grid-enabled" event processing, data analysis, and collaboration

12 Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- US-CERN link: 2 x 155 Mbps now; plans: 622 Mbps in April 2002; DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link in ~2003 or early 2004
- Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia, and South America
- Evolution typical of major HENP links 2001-2006

13 Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
[*] Installed BW; maximum link occupancy of 50% assumed. See http://gate.hep.anl.gov/lprice/TAN
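
A small sketch of the planning rule in the footnote: installed bandwidth is the required sustained throughput divided by the 50% maximum occupancy. The example throughput value is hypothetical:

```python
# Convert a required sustained throughput into installed link bandwidth,
# assuming links should not be driven above 50% average occupancy
# (the planning rule stated in the footnote). The input value is hypothetical.

MAX_OCCUPANCY = 0.5

def installed_bandwidth_gbps(required_throughput_gbps: float) -> float:
    """Installed capacity needed so average occupancy stays at or below 50%."""
    return required_throughput_gbps / MAX_OCCUPANCY

# Example: sustaining 1.25 Gbps of production traffic calls for a 2.5 Gbps link.
print(installed_bandwidth_gbps(1.25))  # -> 2.5
```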

14 Links Required to US Labs and Transatlantic [*]
[*] Maximum link occupancy of 50% assumed

15 Daily, Weekly, Monthly and Yearly Statistics on the 155 Mbps US-CERN Link
- 20-100 Mbps used routinely in '01
- BaBar: 600 Mbps throughput in '02
- BW upgrades quickly followed by upgraded production use

16 RNP Brazil (to 20 Mbps); STARTAP/Abilene OC3 (to 80 Mbps); FIU Miami (to 80 Mbps)

17 Total U.S. Internet Traffic (Source: Roberts et al., 2001)
[Chart: U.S. Internet traffic, 1970-2010, on a log scale from 10 bps to 100 Pbps; ARPA and NSF data to 1996, new measurements thereafter; growth trends of 4x/year and 2.8x/year shown; voice crossover in August 2000; projected at 4x/year, up to the limit of the same % of GDP as voice]
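
As a quick illustration of what these growth rates compound to, a sketch; the starting traffic level and time horizon are hypothetical, chosen only to show the arithmetic:

```python
# Compound annual traffic growth, as suggested by the chart above.
# The starting traffic level and horizon are hypothetical examples.

def project_traffic(start_tbps: float, growth_per_year: float, years: int) -> float:
    """Traffic level after compounding growth_per_year for the given number of years."""
    return start_tbps * growth_per_year ** years

start = 0.1  # hypothetical starting point, in Tbps
for rate in (4.0, 2.8):
    print(f"{rate}x/year for 5 years: {project_traffic(start, rate, 5):.1f} Tbps")
# 4x/year yields ~102 Tbps after 5 years; 2.8x/year yields ~17 Tbps from the same start.
```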

18 AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)
[Charts: hourly traffic on 2/03/02, peaking near 6.0 Gbps; monthly traffic showing 2x growth from 8/00 to 3/01 and another 2x growth from 8/01 to 12/01]

19 ICFA SCIC December 2001: Backbone and Int'l Link Progress
- Abilene: upgrade from 2.5 to 10 Gbps; additional lambdas on demand planned for targeted applications
- GEANT pan-European backbone (http://www.dante.net/geant) now interconnects 31 countries; includes many trunks at OC48 and OC192
- CA*net4: interconnect customer-owned dark fiber nets across Canada, starting in 2003
- GWIN (Germany): connection to Abilene to 2 x OC48 expected in 2002
- SuperSINET (Japan): two OC12 links, to Chicago and Seattle; plan upgrade to 2 x OC48 connection to the US West Coast in 2003
- RENATER (France): connected to GEANT at OC48; CERN link to OC12
- SuperJANET4 (UK): mostly OC48 links, connected to academic MANs typically at OC48 (http://www.superjanet4.net)
- US-CERN link (2 x OC3 now) to OC12 this Spring; OC192 by ~2005; DataTAG research link OC48 in Summer 2002; to OC192 in 2003-4
- SURFnet (NL): link to US at OC48
- ESnet: 2 x OC12 backbone, with OC12 to HEP labs; plans to connect to STARLIGHT using Gigabit Ethernet

20 Key Network Issues & Challenges
TCP does not scale to the 1-10 Gbps range.
Net infrastructure requirements for high throughput:
- Packet loss must be ~zero (well below 0.01%), i.e. no "commodity" networks; need to track down packet loss that is not due to congestion
- No local infrastructure bottlenecks: Gigabit Ethernet "clear paths" between selected host pairs are needed now; to 10 Gbps Ethernet by ~2003 or 2004
- TCP/IP stack configuration and tuning absolutely required: large windows, possibly multiple streams (see the sketch below); new concepts of fair use must then be developed
- Careful router configuration; monitoring
- Server and client CPU, I/O and NIC throughput sufficient
- End-to-end monitoring and tracking of performance
- Close collaboration with local and "regional" network staffs
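
A minimal sketch of the window-sizing arithmetic behind "large windows": the TCP window must cover the bandwidth-delay product of the path. The link speed, RTT and buffer values below are hypothetical examples, not figures from the talk:

```python
import socket

# Bandwidth-delay product: the TCP window needed to keep a long fat pipe full.
# Example path values are hypothetical (e.g. a 622 Mbps transatlantic link).
link_bps = 622e6        # link speed in bits/s
rtt_s = 0.120           # round-trip time in seconds (~120 ms transatlantic)

bdp_bytes = link_bps * rtt_s / 8
print(f"Window needed to fill the pipe: ~{bdp_bytes / 1e6:.1f} MB")   # ~9.3 MB

# Requesting large socket buffers from user space; the kernel may clamp these
# to its configured maximums, which also have to be raised for high-speed paths.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, int(bdp_bytes))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, int(bdp_bytes))
```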

21 201 primary participants (all 50 states, D.C. and Puerto Rico); 75 partner corporations and non-profits; 14 state research and education nets; 15 "GigaPoPs" support 70% of members; 2.5 Gbps backbone

22 Rapid Advances of Nat'l Backbones: Next Generation Abilene
- Abilene partnership with Qwest extended through 2006
- Backbone to be upgraded to 10 Gbps in phases, to be completed by October 2003; GigaPoP upgrade started in February 2002
- Capability for flexible provisioning in support of future experimentation in optical networking, in a multi-lambda infrastructure

23 National R&E Network Example: Germany (DFN), Transatlantic Connectivity Q1 2002
- 2 x OC12 now: NY-Hamburg and NY-Frankfurt; ESnet peering at 34 Mbps
- Upgrade to 2 x OC48 expected in Q1 2002; direct peering to Abilene and CANARIE expected
- UCAID will add (?) another 2 OC48's; proposing a Global Terabit Research Network (GTRN)
- FSU connections via satellite: Yerevan, Minsk, Almaty, Baikal; speeds of 32-512 kbps
- SILK Project (2002): NATO funding; links to the Caucasus and Central Asia (8 countries); currently 64-512 kbps; propose VSAT for 10-50x BW: NATO + state funding

24 National Research Networks in Japan
- SuperSINET: started operation January 4, 2002; support for 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs
- Provides: 10 Gbps IP connection; direct intersite GbE links; some connections to 10 GbE in JFY2002
- HEPnet-J: will be re-constructed with MPLS-VPN in SuperSINET
- Proposal: two TransPacific 2.5 Gbps wavelengths, and a KEK-CERN Grid testbed
[Network map: IP routers, WDM paths and optical cross-connects (OXC) linking KEK, U-Tokyo, NAO, NII (Hitotsubashi, Chiba), ISAS, IMS, Tohoku U, NIFS, NIG, Nagoya U, Kyoto U (ICR), Osaka U, and the Internet via Tokyo, Nagoya and Osaka]

25 TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech
A preview of the Grid hierarchy and networks of the LHC era
- DTF backplane: 40 Gbps; multiple 10 GbE (Qwest); multiple 10 GbE on I-WIRE dark fiber; OC-48 (2.5 Gb/s) to Abilene
- Hubs and sites: NCSA/UIUC, ANL, UIC, Ill Inst of Tech, Univ of Chicago, Starlight/NW Univ, Indianapolis (Abilene NOC), Urbana, Pasadena, San Diego; multiple carrier hubs
- Solid lines in place and/or available in 2001; dashed I-WIRE lines planned for Summer 2002
Source: Charlie Catlett, Argonne

26 US CMS TeraGrid Seamless Prototype
- Caltech/Wisconsin Condor/NCSA production
- Simple job launch from Caltech: authentication using the Globus Security Infrastructure (GSI); resources identified using the Globus Information Infrastructure (GIS)
- CMSIM jobs (batches of 100, 12-14 hours, 100 GB output) sent to the Wisconsin Condor flock using Condor-G (sketched below); output files automatically stored in NCSA Unitree (GridFTP)
- ORCA phase: read in and process jobs at NCSA; output files automatically stored in NCSA Unitree
- Future: multiple CMS sites; storage in Caltech HPSS also, using GDMP (with LBNL's HRM)
- Animated flow diagram of the DTF prototype: http://cmsdoc.cern.ch/~wisniew/infrastructure.html
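
A minimal sketch of what the Condor-G job launch could look like: a Python helper that writes a Condor submit description (globus universe) for one batch and hands it to condor_submit. The gatekeeper contact, executable name and file paths are hypothetical placeholders, not the actual US CMS configuration:

```python
import subprocess
from pathlib import Path

# Write a Condor-G ("globus" universe) submit description for one CMSIM batch
# and submit it. Gatekeeper address, executable and paths are hypothetical.

def submit_cmsim_batch(batch_id: int,
                       gatekeeper: str = "gatekeeper.example.edu/jobmanager-condor") -> None:
    submit_text = f"""\
universe        = globus
globusscheduler = {gatekeeper}
executable      = cmsim_wrapper.sh
arguments       = --batch {batch_id} --events 100
output          = cmsim_{batch_id}.out
error           = cmsim_{batch_id}.err
log             = cmsim_{batch_id}.log
queue
"""
    submit_file = Path(f"cmsim_{batch_id}.sub")
    submit_file.write_text(submit_text)
    # condor_submit prints the cluster id on success; check=True surfaces failures.
    subprocess.run(["condor_submit", str(submit_file)], check=True)

if __name__ == "__main__":
    submit_cmsim_batch(1)
```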

27 Throughput quality improvements: BW_TCP < MSS/(RTT*sqrt(loss)) [*]
- 80% improvement/year, i.e. a factor of 10 in 4 years
- China: recent improvement
- Eastern Europe: keeping up
[*] See "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm," Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), 7/1997
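
A sketch evaluating the Mathis bound for an illustrative path; the MSS, RTT and loss values are hypothetical examples, chosen only to show the scaling (and why ~zero loss is needed for Gbps-class flows):

```python
from math import sqrt

# Mathis et al. bound on steady-state TCP throughput: BW < MSS / (RTT * sqrt(loss)).
# Example parameters are hypothetical; constants of order 1 in the full formula
# are omitted, following the slide.

def mathis_bound_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Upper bound on TCP throughput in bits per second."""
    return 8 * mss_bytes / (rtt_s * sqrt(loss_rate))

# Transatlantic-style path: 1460-byte MSS, 120 ms RTT.
for loss in (1e-2, 1e-4, 1e-6):
    bw = mathis_bound_bps(1460, 0.120, loss)
    print(f"loss={loss:.0e}: bound ~{bw / 1e6:.1f} Mbit/s")
# loss=1e-2 -> ~1 Mbit/s; loss=1e-4 -> ~10 Mbit/s; loss=1e-6 -> ~97 Mbit/s,
# so sustaining multi-Gbps on a 120 ms path requires loss far below 0.01%.
```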

28 Internet2 HENP WG [*]
- Mission: to help ensure that the required national and international network infrastructures (end-to-end), standardized tools and facilities for high performance and end-to-end monitoring and tracking, and collaborative systems are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community
- To carry out these developments in a way that is broadly applicable across many fields
- Formed an Internet2 WG as a suitable framework: Oct. 26, 2001
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y: J. Williams (Indiana)
Website: http://www.internet2.edu/henp; also see the Internet2 End-to-End Initiative: http://www.internet2.edu/e2e

29 True End to End Experience
- User perception
- Application
- Operating system
- Host IP stack
- Host network card
- Local Area Network
- Campus backbone network
- Campus link to regional network/GigaPoP
- GigaPoP link to Internet2 national backbones
- International connections
(EYEBALL, APPLICATION, STACK, JACK, NETWORK, ...)

30 Networks, Grids and HENP
- Grids are starting to change the way we do science and engineering; successful use of Grids relies on high performance national and international networks
- Next generation 10 Gbps network backbones are almost here in the US, Europe and Japan: first stages arriving in 6-12 months
- Major transoceanic links at 2.5-10 Gbps within 0-18 months
- Network improvements are especially needed in South America and some other world regions: Brazil, Chile; India, Pakistan, China; Africa and elsewhere
- Removing regional, last mile bottlenecks and compromises in network quality is now on the critical path
- Getting high (reliable) Grid performance across networks means: end-to-end monitoring and a coherent approach; getting high performance (TCP) toolkits into users' hands; working in concert with Internet2 E2E, the I2 HENP WG and DataTAG; working with the Grid projects and GGF

