Cyberinfrastructure and Internet2
Eric Boyd, Deputy Technology Officer, Internet2
What is Cyberinfrastructure (CI)?
- A strategic orientation supported by NSF
- Calls for large-scale public investment to encourage the evolution of widely distributed computing via the telecommunications network
- Goal: deploy the combined capacity of multiple sites to support the advance of current research, initially in science and engineering
The Distributed CI Computer
[Diagram: the researcher at the center of a distributed "CI computer" whose components are tied together by the network. Major component groups: instrumentation (security, control, data generation); computation (analysis, simulation, program security, management); security and access (authentication, access control, authorization); researcher tools (control, program, viewing, security); display and visualization (3D imaging, display tools, security, data input); collaboration tools (publishing); data sets (search, storage, security, retrieval, input, schema, metadata, data directories, ontologies, archive); human support (help desk); education and outreach (training); policy and funding (resource providers, funding agencies, campuses).]
The Network is the Backplane for the Distributed CI Computer
[Same diagram as the previous slide, now with the network highlighted as the backplane interconnecting every component group.]
Challenge and Opportunity
- Challenge: the R&E community thinks of CI primarily in terms of building distributed computing clusters
- Opportunity: the network is a key component of CI
- Internet2 is leading the development of solutions for the network component of CI
LHC epitomizes the CI Challenge
Current Situation
- The Large Hadron Collider (LHC) at CERN will go operational in 2008
- Over 68 U.S. universities and national laboratories are poised to receive data
- More than 1,500 scientists are waiting for this data
- Are campus, regional, and national networks ready for the task?
[Diagram: the LHC data distribution hierarchy and the networks connecting its tiers.]
- CERN Tier 0: raw data
- Tier 1 (12 organizations, including FNAL and BNL in the US): shared data storage and reduction; connected to Tier 0 via the LHCOPN
- US Tier 2 (15 organizations: CMS 7, ATLAS 6-7): provides data to Tier 3; connected to Tier 1 via GEANT-ESnet-Internet2
- US Tier 3 (68 organizations): scientists request data; connected via Internet2 and its connectors
- US Tier 4 (1,500 US scientists): scientists analyze data over local infrastructure
Peak Flow Network Requirements
- CERN Tier 0 to Tier 1 (over the LHCOPN): requires 10-40 Gbps
- Tier 1 to Tier 2 (over GEANT-ESnet-Internet2): requires 10-20 Gbps
- Tier 1 or Tier 2 to Tier 3 (over Internet2/connectors and local infrastructure): estimated 1.6 Gbps per transfer (2 TB in 3 hours)
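As a sanity check on the per-transfer estimate (a back-of-the-envelope calculation added here, not part of the original slide), moving 2 TB in 3 hours implies an average rate of

\[
\frac{2 \times 2^{40}\,\text{bytes} \times 8\,\text{bits/byte}}{3 \times 3600\,\text{s}} \approx 1.63\,\text{Gbps},
\]

or about 1.48 Gbps if a terabyte is read as 10^12 bytes. Either way, a single such transfer would saturate a typical 1 Gbps campus uplink, underscoring the question of whether campus networks are ready.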
Science Network Requirements Aggregation Summary (slide courtesy of ESnet)
- Advanced Light Source. Connectivity: DOE sites, US universities, industry. Bandwidth: 1 TB/day (300 Mbps) in 2006, rising to 5 TB/day (1.5 Gbps) by 2010. Traffic: bulk data, remote control. Services: guaranteed bandwidth, PKI/Grid.
- Bioinformatics. Connectivity: DOE sites, US universities. Bandwidth: 625 Mbps in 2006 (12.5 Gbps within two years), rising to 250 Gbps by 2010. Traffic: bulk data, remote control, point-to-multipoint. Services: guaranteed bandwidth, high-speed multicast.
- Chemistry / Combustion. Connectivity: DOE sites, US universities, industry. Bandwidth: tens of Gbps by 2010. Traffic: bulk data. Services: guaranteed bandwidth, PKI/Grid.
- Climate Science. Connectivity: DOE sites, US universities, international. Bandwidth: 5 PB per year (5 Gbps) by 2010. Traffic: bulk data, remote control. Services: guaranteed bandwidth, PKI/Grid.
- High Energy Physics (LHC). Reliability: 99.95+% (less than 4 hours of downtime per year). Connectivity: US Tier 1 (DOE), US Tier 2 (universities), international (Europe, Canada). Bandwidth: 10 Gbps in 2006, rising to 60-80 Gbps by 2010 (30-40 Gbps per US Tier 1). Traffic: bulk data, remote control. Services: guaranteed bandwidth, traffic isolation, PKI/Grid.
Science Network Requirements Aggregation Summary, continued (slide courtesy of ESnet)
- Magnetic Fusion Energy. Reliability: 99.999% (impossible without full redundancy). Connectivity: DOE sites, US universities, industry. Bandwidth: 200+ Mbps in 2006, rising to 1 Gbps by 2010. Traffic: bulk data, remote control. Services: guaranteed bandwidth, guaranteed QoS, deadline scheduling.
- NERSC. Connectivity: DOE sites, US universities, industry, international. Bandwidth: 10 Gbps in 2006, rising to 20-40 Gbps by 2010. Traffic: bulk data, remote control. Services: guaranteed bandwidth, guaranteed QoS, deadline scheduling, PKI/Grid.
- NLCF. Connectivity: DOE sites, US universities, industry, international. Bandwidth: parity with the backbone in both 2006 and 2010. Traffic: bulk data.
- Nuclear Physics (RHIC). Connectivity: DOE sites, US universities, international. Bandwidth: 12 Gbps in 2006, rising to 70 Gbps by 2010. Traffic: bulk data. Services: guaranteed bandwidth, PKI/Grid.
- Spallation Neutron Source. Reliability: high (24x7 operation). Connectivity: DOE sites. Bandwidth: 640 Mbps in 2006, rising to 2 Gbps by 2010. Traffic: bulk data.
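The volume-based entries in these tables (TB/day, PB/year) are easier to compare with the rate-based ones (Mbps, Gbps) after converting to a common unit. The short Python sketch below is an illustration added here, not part of the ESnet slides; the function name and the decimal-terabyte assumption are ours. Note that the provisioned rates in the tables sit well above the computed averages, since scientific traffic is bursty and links need headroom.

def avg_rate_gbps(tb_per_day: float, bytes_per_tb: float = 1e12) -> float:
    """Average rate in Gbps needed to move tb_per_day terabytes within 24 hours."""
    bits_per_day = tb_per_day * bytes_per_tb * 8
    return bits_per_day / (24 * 3600) / 1e9

# Advanced Light Source rows from the first table:
print(round(avg_rate_gbps(1), 2))  # 1 TB/day -> ~0.09 Gbps average (table provisions 300 Mbps)
print(round(avg_rate_gbps(5), 2))  # 5 TB/day -> ~0.46 Gbps average (table provisions 1.5 Gbps)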
CI Components
- Supercomputing / cycles / computational
- Supercomputing / storage (non-volatile)
- Interconnecting networks (campuses, regionals, backbones)
- Cyberinfrastructure software
- Analysis / visualization
CI Components
[Diagram: applications sit on top of, and call on, the network cyberinfrastructure.]
- Applications: bulk transport, 2-way interactive video, real-time communications, ...
- Network cyberinfrastructure: performance infrastructure / tools, middleware, control plane, Phoebus, ...
- Deployed in the network: measurement nodes, control plane nodes
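To make the layering concrete, here is a minimal Python sketch of the idea that applications call down into the network cyberinfrastructure (measurement, control plane) rather than treating the network as an opaque wall jack. Every name in it is a hypothetical illustration, not an actual Internet2, Phoebus, perfSONAR, or IDC API:

class NetworkCI:
    """Hypothetical facade over network CI services."""

    def reserve_circuit(self, src: str, dst: str, gbps: float) -> bool:
        # A real control plane (e.g., OSCARS/DRAGON) would attempt a dynamic
        # circuit reservation; this stub simply grants anything up to 10 Gbps.
        return gbps <= 10.0

def bulk_transfer(ci: NetworkCI, src: str, dst: str, terabytes: float) -> None:
    """A bulk-transport application consulting network CI before moving data."""
    needed_gbps = 1.6  # sized from the per-transfer estimate earlier in the deck
    if ci.reserve_circuit(src, dst, needed_gbps):
        print(f"Reserved {needed_gbps} Gbps circuit: {src} -> {dst} ({terabytes} TB)")
    else:
        print("Reservation denied; falling back to best-effort IP")

bulk_transfer(NetworkCI(), "tier2.example.edu", "tier3.example.edu", terabytes=2.0)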
Internet2 Network CI Software
- Dynamic circuit control infrastructure: DRAGON (with ISI, MAX), OSCARS (with ESnet)
- Middleware (federated trust infrastructure): Shibboleth, Signet, Grouper, COmanage
- Performance monitoring infrastructure: perfSONAR (with ESnet, GEANT2 JRA1, RNP, and many others), plus the BWCTL, NDT, OWAMP, and Thrulay measurement tools
- Distributed system infrastructure: Topology Service (with University of Delaware), Distributed Lookup Service (with University of Delaware, PSNC)
Internet2 Network CI Standardization
- Dynamic circuit control protocol (IDC): DICE-Control, GLIF
- Measurement schema / protocol: OGF NMWG, IETF IPPM, perfSONAR Consortium
- Middleware arena: Liberty Alliance, OASIS, possible emerging corporate consortium
- Topology schema / protocol: OGF NML-WG, perfSONAR Consortium, DICE-Control
Internet2’s CI Vision
- Be a networking cyber-service provider
- Be a trust cyber-service provider
- Be a CI technology developer
Internet2’s CI Position
Internet2’s position as:
- backbone network provider,
- federated trust infrastructure provider, and
- forum for collaboration by members of the R&E community
gives Internet2 a unique vision and strategy for cyberinfrastructure.
Internet2’s CI Constituencies
- Collaborators
- University members
- Regional networks
- Regional CI organizations
- High-performance computing centers
- Federal partners
- International partners
- CI integrators
Early Thoughts: Internet2’s CI Strategy (1)
- Requirements informed by our membership; agenda set by our governance mechanisms
- Offer, and in some cases develop, services and technology that are key components of a coherent CI software suite:
  - CI-enhanced networks: IP Network, Dynamic Circuit Network
  - Services: InCommon, USHER
  - New technologies: DCN software, perfSONAR, Shibboleth
- Systems integration: assemble open-source communication tools into a common veneer
- Emphasize a systems approach towards CI
Early Thoughts: Internet2’s CI Strategy (2)
- Take a "toolkit" approach, but make sure it still looks like a wall jack to the end user
- Push for best practices for campuses (what to do and how to do it), so the community learns as a whole and avoids reinventing the wheel
- Contribute to the support structure for use of CI: open-source CI software, centers of excellence, training
Early Thoughts: Internet2’s CI Strategy (3)
- Play the role of community CI coordinator, convening community conversations
- Partner with other community coordinators (e.g., TeraGrid, EDUCAUSE)
- Play a convening function to facilitate the development, use, and dissemination of CI
- Take a lead in international outreach efforts
- Facilitate conversations among federal agencies (e.g., DOE, NSF, NIH), each of which is developing its own CI