Harvey B. Newman, FAST Meeting, Caltech, July 1, 2002. HENP Grids and Networks: Global Virtual Organizations

Computing Challenges: Petabytes, Petaflops, Global VOs
- Geographical dispersion: of people and resources
- Complexity: the detector and the LHC environment
- Scale: tens of Petabytes per year of data
Physicists at 250+ institutes in 60+ countries. Major challenges associated with:
- Communication and collaboration at a distance
- Managing globally distributed computing and data resources
- Cooperative software development and physics analysis
New forms of distributed systems: Data Grids

Four LHC Experiments: The Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb: Higgs + new particles; quark-gluon plasma; CP violation
- Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
- 0.1 to 1 Exabyte (1 EB = 10^18 Bytes) (2007) (~2012?) for the LHC experiments

10^9 events/sec; selectivity: 1 in 10^13 (1 person in a thousand world populations). LHC: Higgs decay into 4 muons (tracker only); 1000X the LEP data rate.

LHC Data Grid Hierarchy (diagram): the Experiment's Online System (~PByte/sec) feeds the Tier 0+1 center at CERN (700k SI95, ~1 PB disk, tape robot); Tier 1 centers (FNAL: 200k SI95, 600 TB; IN2P3; INFN; RAL); Tier 2 centers; Tier 3 institutes (~0.25 TIPS, workstations) connected at 0.1-10 Gbps; Tier 4. Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels, with a physics data cache. CERN/Outside resource ratio ~1:2; Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1.

Emerging Data Grid User Communities
- Grid Physics Network (GriPhyN): ATLAS, CMS, LIGO, SDSS
- Particle Physics Data Grid (PPDG); Int'l Virtual Data Grid Lab (iVDGL)
- NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
- Access Grid; VRVS: supporting group-based collaboration
And:
- Genomics, Proteomics, ...
- The Earth System Grid and EOSDIS
- Federating Brain Data
- Computed MicroTomography
- Virtual Observatories

HENP Related Data Grid Projects
Projects:
- PPDG I (USA, DOE): $2M
- GriPhyN (USA, NSF): $11.9M + $1.6M
- EU DataGrid (EU, EC): €10M
- PPDG II (CP) (USA, DOE): $9.5M
- iVDGL (USA, NSF): $13.7M + $2M
- DataTAG (EU, EC): €4M
- GridPP (UK, PPARC): >$15M
- LCG (Phase 1) (CERN member states): 30 MCHF
Many other projects of interest to HENP:
- Initiatives in US, UK, Italy, France, NL, Germany, Japan, ...
- Networking initiatives: DataTAG, AMPATH, CALREN-XD, ...
- US Distributed Terascale Facility ($53M, 12 TeraFlops, 40 Gb/s network)

Daily, weekly, monthly and yearly statistics on the 155 Mbps US-CERN link: ... Mbps used routinely in '01; BaBar: 600 Mbps throughput in '02. BW upgrades quickly followed by upgraded production use.

Tier A "Physicists have indeed foreseen to test the GRID principles starting first from the Computing Centres in Lyon and Stanford (California). A first step towards the ubiquity of the GRID." Pierre Le Hir Le Monde 12 april 2001 CERN-US Line + Abilene Renater + ESnet 3/2002 D. Linglin: LCG Wkshop Two centers are trying to work as one: -Data not duplicated -Internationalization -transparent access, etc…

RNP Brazil (to 20 Mbps); FIU Miami/So. America (to 80 Mbps)

Transatlantic Net WG (HN, L. Price) Bandwidth Requirements [*]
[*] Installed BW; maximum link occupancy of 50% assumed.

Links Required to US Labs and Transatlantic [*]
[*] Maximum link occupancy of 50% assumed; OC3 = 155 Mbps; OC12 = 622 Mbps; OC48 = 2.5 Gbps; OC192 = 10 Gbps. Note: new ESnet upgrade plan.
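The 50% maximum-occupancy assumption in the footnote implies a simple sizing rule: the installed link must offer at least twice the required average throughput. A minimal sketch of that rule, with illustrative throughput values and a hypothetical helper name (not taken from the original tables):

```python
# Sketch: pick the smallest standard SONET/SDH link whose capacity,
# derated to 50% maximum occupancy (as assumed in the WG tables),
# covers a required average flow. The sample flows are illustrative.

OC_RATES_MBPS = {"OC3": 155, "OC12": 622, "OC48": 2500, "OC192": 10000}
MAX_OCCUPANCY = 0.5  # 50% maximum link occupancy assumed

def smallest_link(required_mbps: float) -> str:
    """Return the smallest OC level whose usable capacity covers the flow."""
    for name, rate in sorted(OC_RATES_MBPS.items(), key=lambda kv: kv[1]):
        if rate * MAX_OCCUPANCY >= required_mbps:
            return name
    return "beyond OC192 (multiple lambdas needed)"

if __name__ == "__main__":
    for need in (100, 300, 1000, 4000):  # required Mbps (illustrative)
        print(f"{need:>5} Mbps required -> install {smallest_link(need)}")
```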

MONARC: CMS Analysis Process. Hierarchy of processes (experiment, analysis groups, individuals):
- Reconstruction of RAW data: experiment-wide activity (10^9 events); 3000 SI95-sec/event; 1 job per year
- Re-processing (3 times per year, for new detector calibrations or understanding): 3000 SI95-sec/event; 3 jobs per year
- Selection: iterative selection, once per month; ~20 groups' activity (10^9 -> 10^7 events); trigger-based and physics-based refinements; 25 SI95-sec/event; ~20 jobs per month
- Analysis: different physics cuts and MC comparison, ~once per day; ~25 individuals per group activity (10^6-10^7 events); algorithms applied to data to get results; 10 SI95-sec/event; ~500 jobs per day
- Monte Carlo: 5000 SI95-sec/event

Tier0-Tier1 Link Requirements Estimate: for Hoffmann Report 2001
1) Tier1 -> Tier0 data flow for analysis: ... Gbps
2) Tier2 -> Tier0 data flow for analysis: ... Gbps
3) Interactive collaborative sessions (30 peak): ... Gbps
4) Remote interactive sessions (30 flows peak): ... Gbps
5) Individual (Tier3 or Tier4) data transfers: 0.8 Gbps (limit to 10 flows of 5 MBytes/sec each)
TOTAL per Tier0-Tier1 link: ... Gbps
NOTE:
- Adopted by the LHC experiments; given in the Steering Committee Report on LHC Computing as "... Gbps per experiment"
- Corresponds to ~10 Gbps baseline BW installed on the US-CERN link
- The report also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet + WAN by ...

Tier0-Tier1 BW Requirements Estimate: for Hoffmann Report 2001
- Does not include more recent ATLAS data estimates: 270 Hz at low luminosity instead of 100 Hz; 400 Hz at design luminosity instead of 100 Hz; 2 MB/event instead of 1 MB/event?
- Does not allow fast download to Tier3+4 of "small" object collections. Example: downloading 10^7 events of AODs (10^4 bytes each) is ~100 GBytes; at 5 MBytes/sec per person (above), that's 6 hours!
- This is still a rough, bottoms-up, static, and hence conservative model. A dynamic distributed DB or "Grid" system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth. It does not include "virtual data" operations, derived data copies, or data-description overheads.
- Further MONARC model studies are needed.
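The download estimate above is plain arithmetic: 10^7 events times 10^4 bytes is 100 GB, and at 5 MBytes/sec that takes roughly six hours. A minimal sketch of that calculation (the function name and parameterization are illustrative, not from the report):

```python
# Sketch of the slide's back-of-envelope estimate: time to pull a "small"
# AOD collection to a Tier3/Tier4 user at a fixed per-person rate.
# Numbers mirror the slide; nothing here is a measured result.

def transfer_hours(n_events: float, bytes_per_event: float, rate_bytes_per_s: float) -> float:
    """Total transfer time in hours for n_events of a given size at a given rate."""
    total_bytes = n_events * bytes_per_event
    return total_bytes / rate_bytes_per_s / 3600.0

if __name__ == "__main__":
    hours = transfer_hours(1e7, 1e4, 5e6)  # 10^7 AOD events x 10 kB each, at 5 MB/s
    print(f"100 GB at 5 MB/s takes about {hours:.1f} hours")  # ~5.6 h, i.e. ~6 hours
```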

Maximum Throughput on Transatlantic Links (155 Mbps) [*]
- 8/10/01: ... Mbps reached with 30 streams: SLAC-IN2P3
- 9/1/01: ... Mbps in one stream: CIT-CERN
- 11/5/01: ... Mbps in one stream (modified kernel): CIT-CERN
- 1/09/02: ... Mbps for one stream shared on ... Mbps links
- 3/11/02: ... Mbps disk-to-disk with one stream on a 155 Mbps link (Chicago-CERN)
- 5/20/02: ... Mbps SLAC-Manchester on OC12 with ~100 streams
- 6/1/02: ... Mbps Chicago-CERN, one stream on OC12 (modified kernel)
[*] Also see ... and the Internet2 E2E Initiative.

Some Recent Events: Reported 6/1/02 to ICFA/SCIC
Progress in high throughput, 0.1 to 1 Gbps:
- Land speed record: SURFnet - Alaska (IPv6) (0.4+ Gbps)
- SLAC - Manchester (Les C. and Richard H-J) (0.4+ Gbps)
- Tsunami (Indiana) (0.8 Gbps UDP)
- Tokyo - KEK (0.5-0.9 Gbps)
Progress in pre-production and production networking:
- 10 MBytes/sec FNAL-CERN (Michael Ernst)
- 15 MBytes/sec disk-to-disk Chicago-CERN (Sylvain Ravot)
KPNQwest files for Chapter 11; stops network yesterday:
- Near-term pricing of competitor (DT) OK
- Unknown impact on prices and future planning in the medium and longer term

Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- US-CERN link: 622 Mbps this month
- DataTAG 2.5 Gbps research link in Summer 2002
- 10 Gbps research link by approximately mid-2003
Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America. Baseline evolution typical of major HENP links.

Total U.S. Internet Traffic (chart; source: Roberts et al., 2001). Annotations: voice crossover in August ...; growth of ...X/year and 2.8X/year; based on ARPA & NSF data to '96 plus new measurements; limit of the same % of GDP as voice; projected at 3X/year. (Traffic axis spans 10 bps to 100 Pbps.)

Internet Growth Rate Fluctuates Over Time (chart; source: Roberts et al., 2002): U.S. Internet edge traffic growth rate per year, 6-month lagging measure, Jan 00 through Jan 02; average 3.0/year; 10/00-4/01 growth reported at 3.6/year and 4.0/year.

AMS-IX Internet Exchange Throughput: accelerating growth in Europe (NL). (Charts: monthly traffic with 2X growth from 8/00 to 3/01 and 2X growth from 8/... to .../01; hourly traffic on 3/22/02, scale 2.0-6.0 Gbps.)

ICFA SCIC Meeting March 9 at CERN: Updates from Members
- Abilene: upgrade from 2.5 to 10 Gbps; additional scheduled lambdas planned for targeted applications: Pacific and National Light Rail
- US-CERN: upgrade on track to 622 Mbps in July, setup and testing done in STARLIGHT; 2.5G research lambda STARLIGHT-CERN by this summer; 2.5G triangle between STARLIGHT (US), SURFnet (NL) and CERN
- SLAC + IN2P3 (BaBar): getting 100 Mbps over the 155 Mbps CERN-US link; 50 Mbps over the RENATER 155 Mbps link, limited by ESnet; 600 Mbps throughput is the BaBar target for this year
- FNAL: expect ESnet upgrade to 622 Mbps this month; plans for dark fiber to STARLIGHT underway, could be done in ~4 months (railway or electric co. provider)

ICFA SCIC: A&R Backbone and International Link Progress u GEANT Pan-European Backbone ( è Now interconnects 31 countries è Includes many trunks at 2.5 and 10 Gbps u UK è 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene u SuperSINET (Japan): 10 Gbps IP and 10 Gbps Wavelength è Upgrade to Two 0.6 Gbps Links, to Chicago and Seattle è Plan upgrade to 2 X 2.5 Gbps Connection to US West Coast by 2003 u CA*net4 (Canada): Interconnect customer-owned dark fiber nets across Canada at 10 Gbps, starting July 2002 è “Lambda-Grids” by ~ u GWIN (Germany): Connection to Abilene Upgraded to 2 X 2.5 Gbps early in 2002 u Russia è Start 10 Mbps link to CERN and ~90 Mbps to US Now

210 primary participants; all 50 states, D.C. and Puerto Rico; 80 partner corporations and non-profits; 22 state research and education nets; 15 "GigaPoPs" support 70% of members; 2.5 to 10 Gbps backbone. Caltech connection with GbE to the new backbone.

National R&E Network Example Germany: DFN TransAtlanticConnectivity Q STM 4 STM 16 u 2 X OC12 Now: NY-Hamburg and NY-Frankfurt u ESNet peering at 34 Mbps u Upgrade to 2 X OC48 expected in Q u Direct Peering to Abilene and Canarie expected u UCAID will add another 2 OC48’s; Proposing a Global Terabit Research Network (GTRN) u FSU Connections via satellite: Yerevan, Minsk, Almaty, Baikal è Speeds of kbps u SILK Project (2002): NATO funding è Links to Caucasus and Central Asia (8 Countries) è Currently kbps è Propose VSAT for X BW: NATO + State Funding

National Research Networks in Japan
- SuperSINET: started operation January 4, 2002; support for 5 important areas: HEP, genetics, nano-technology, space/astronomy, Grids; provides 10 wavelengths: a 10 Gbps IP connection, 7 direct intersite GbE links, and some connections to 10 GbE in JFY2002
- HEPnet-J: will be re-constructed with MPLS-VPN in SuperSINET
- Proposal: two transpacific 2.5 Gbps wavelengths, and a Japan-CERN Grid testbed by ~2003
(Topology map: Tokyo, Osaka and Nagoya hubs with IP and WDM paths, IP routers and OXCs linking Osaka U, Kyoto U, ICR Kyoto-U, Nagoya U, NIFS, NIG, KEK, Tohoku U, IMS, U-Tokyo, NAO, NII Hitotsubashi, NII Chiba, ISAS and the Internet.)

DataTAG Project
(Map: wavelength triangle linking Geneva (CERN), STARLIGHT/STAR TAP and SURFnet (NL), with connections to UK SuperJANET4, Abilene, ESnet, CALREN, Italy's GARR-B, GEANT, New York, and France's Renater.)
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL), and INFN (IT); with US partners (DOE/NSF: UIC, NWU and Caltech)
- Main aims: ensure maximum interoperability between US and EU Grid projects; transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle in 7/02 (10 Gbps triangle in 2003)

TeraGrid: NCSA, ANL, SDSC, Caltech
(Map, source: Charlie Catlett, Argonne: DTF backplane of 4 x 10 Gbps linking Caltech, San Diego, NCSA/UIUC and ANL via multiple carrier hubs at Starlight/NW Univ, with UIC, Ill Inst of Tech, Univ of Chicago, Indianapolis (Abilene NOC) and I-WIRE; OC-48 (2.5 Gb/s) Abilene links to Chicago, Indianapolis and Urbana; multiple 10 GbE over Qwest and over I-WIRE dark fiber.)
A preview of the Grid hierarchy and networks of the LHC era. Idea to extend the TeraGrid to CERN.

CA ONI, CALREN-XD + Pacific Light Rail Backbones (Proposed) Also: LA-Caltech Metro Fiber; National Light Rail

Key Network Issues & Challenges
Net infrastructure requirements for high throughput:
- Packet loss must be ~zero (at and below ...), i.e. no "commodity" networks; need to track down non-congestion packet loss
- No local infrastructure bottlenecks: multiple Gigabit Ethernet "clear paths" between selected host pairs are needed now; 10 Gbps Ethernet paths by 2003 or 2004
- TCP/IP stack configuration and tuning absolutely required: large windows, possibly multiple streams; new concepts of fair use must then be developed
- Careful router, server, client and interface configuration; sufficient CPU, I/O and NIC throughput
- End-to-end monitoring and tracking of performance
- Close collaboration with local and "regional" network staffs
TCP does not scale to the 1-10 Gbps range.
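One way to see why "large windows" are unavoidable on these paths is the bandwidth-delay product: a single TCP stream must keep roughly bandwidth times round-trip time in flight to fill the pipe. A minimal sketch, assuming a ~120 ms transatlantic RTT (an illustrative value, not taken from the slide):

```python
# Sketch: minimum TCP window needed to fill a long fat pipe is the
# bandwidth-delay product. Link speeds and the RTT are illustrative.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: minimum window (bytes) to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8.0

if __name__ == "__main__":
    rtt = 0.120  # ~120 ms transatlantic round trip (assumed)
    for gbps in (0.622, 2.5, 10.0):
        window_mb = bdp_bytes(gbps * 1e9, rtt) / 1e6
        print(f"{gbps:>5} Gbps x {rtt*1000:.0f} ms RTT -> window >= {window_mb:.1f} MB")
    # Far beyond default 64 KB windows, hence the need for tuning or multiple streams.
```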

A Short List: Revolutions in Information Technology (2002-7)
- Managed global Data Grids (as above)
- Scalable data-intensive metro and long-haul network technologies: DWDM (10 Gbps then 40 Gbps per wavelength; 1 to 10 Terabits/sec per fiber); 10 Gigabit Ethernet and 10GbE / 10 Gbps LAN/WAN integration; metro buildout and optical cross connects; dynamic provisioning leading to dynamic path building ("Lambda Grids")
- Defeating the "last mile" problem (wireless, or Ethernet in the first mile): 3G and 4G wireless broadband (from ca. 2003) and/or fixed wireless "hotspots"; fiber to the home; community-owned networks

A Short List: Coming Revolutions in Information Technology
- Storage virtualization: Grid-enabled Storage Resource Middleware (SRM); iSCSI (Internet Small Computer System Interface) integrated with 10 GbE; global file systems
- Internet information software technologies: global information "broadcast" architecture, e.g. the Multipoint Information Distribution Protocol; programmable coordinated agent architectures, e.g. Mobile Agent Reactive Spaces (MARS) by Cabri et al., University of Modena
- The "Data Grid" - human interface: interactive monitoring and control of Grid resources, by authorized groups and individuals, and by autonomous agents

HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps

One Long Range Scenario (ca. ...): HENP as a Driver of Optical Networks; Petascale Grids with TB Transactions
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
- Example: take 800 seconds to complete the transaction. Then a 1 TB transaction needs 10 Gbps of net throughput, 10 TB needs 100 Gbps, and 100 TB needs 1000 Gbps (the capacity of a fiber today)
- Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-10 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields
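The 800-second example reduces to straightforward arithmetic: required throughput is transaction size divided by completion time. A minimal sketch reproducing the rows above (the function name is illustrative):

```python
# Sketch of the slide's arithmetic: sustained throughput needed to move
# one "transaction" of a given size in a fixed completion time (800 s here).

def required_gbps(size_tb: float, seconds: float = 800.0) -> float:
    """Net throughput (Gbps) to move size_tb terabytes in the given time."""
    bits = size_tb * 1e12 * 8
    return bits / seconds / 1e9

if __name__ == "__main__":
    for tb in (1, 10, 100):
        print(f"{tb:>4} TB in 800 s -> {required_gbps(tb):.0f} Gbps")
    # 1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1000 Gbps (~fiber capacity today)
```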

Internet2 HENP WG [*]
Mission: to help ensure that the required
- national and international network infrastructures (end-to-end),
- standardized tools and facilities for high performance and end-to-end monitoring and tracking, and
- collaborative systems
are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community; and to carry out these developments in a way that is broadly applicable across many fields.
Formed an Internet2 WG as a suitable framework: Oct. ...
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y J. Williams (Indiana). Website: ...; also see the Internet2 End-to-End Initiative.

True End-to-End Experience:
- User perception
- Application
- Operating system
- Host IP stack
- Host network card
- Local area network
- Campus backbone network
- Campus link to regional network/GigaPoP
- GigaPoP link to Internet2 national backbones
- International connections
(Diagram: EYEBALL - APPLICATION - STACK - JACK - NETWORK - ...)

HENP Scenario Limitations: Technologies and Costs
- Router technology and costs (ports and backplane)
- Computer CPU, disk and I/O channel speeds to send and receive data
- Link costs: unless dark fiber (?)
- Multi-gigabit transmission protocols end-to-end
- "100 GbE" Ethernet (or something else) by ~2006: for LANs to match WAN speeds

Throughput quality improvements: BW_TCP < MSS/(RTT*sqrt(loss)) [*]
(Chart: 80% improvement/year, i.e. a factor of 10 in 4 years; China improves but remains far behind; Eastern Europe far behind.)
[*] See "Macroscopic Behavior of the TCP Congestion Avoidance Algorithm," Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), 7/1997.
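The quoted Mathis et al. bound makes the earlier "packet loss must be ~zero" requirement quantitative: achievable single-stream throughput falls as 1/sqrt(loss) at fixed MSS and RTT. A minimal sketch, assuming an illustrative 1460-byte MSS and ~120 ms transatlantic RTT (values assumed, not from the slide):

```python
# Sketch of the bound quoted on the slide: BW_TCP < MSS / (RTT * sqrt(loss)).
# Shows how small loss rates cap single-stream TCP on a transatlantic path.

from math import sqrt

def mathis_limit_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Upper bound on single-stream TCP throughput in Mbps."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

if __name__ == "__main__":
    mss, rtt = 1460, 0.120  # standard Ethernet MSS, ~120 ms RTT (assumed)
    for loss in (1e-3, 1e-5, 1e-7):
        print(f"loss={loss:.0e}: <= {mathis_limit_mbps(mss, rtt, loss):.1f} Mbps")
    # Only near-zero loss allows hundreds of Mbps per stream at this RTT.
```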

11,900 hosts; 6,620 registered users in 61 countries; 43 reflectors (7 on Internet2). Annual growth 2 to 3X.

Networks, Grids and HENP
- Next generation 10 Gbps network backbones are almost here in the US, Europe and Japan; first stages arriving, starting now
- Major transoceanic links at ... Gbps in ...
- Network improvements are especially needed in Southeast Europe, South America and some other regions: Romania, Brazil; India, Pakistan, China; Africa
- Removing regional and last-mile bottlenecks and compromises in network quality is now all on the critical path
- Getting high (reliable; Grid) application performance across networks means: end-to-end monitoring and a coherent approach; getting high-performance (TCP) toolkits into users' hands; working in concert with AMPATH, Internet2 E2E, the I2 HENP WG, DataTAG, the Grid projects and the GGF