NSF Cyberinfrastructure
William Y. B. Chang, Senior Program Manager, National Science Foundation
Thomas A. DeFanti and Maxine Brown, Principal Investigators, STAR TAP


NSF Cyberinfrastructure and StarLight Project
William Y. B. Chang, Senior Program Manager, National Science Foundation
Thomas A. DeFanti and Maxine Brown, Principal Investigators, STAR TAP

Information Technology Trends
- Basic hardware exponential growths continue: processor speed, memory density, disk storage density, fiber channel bandwidth
- Growth in scale: clusters of processors, channels per fiber; distributed computing a reality
- Thresholds: terabytes of storage are locally affordable, petabytes are feasible; gigaflops of processing in the lab, teraflops on campus, 10+ TF for national centers
- Software: standard protocols for computer communication; standards for data communication and storage; common operating system and programming language paradigms
- Commodity/commercial and scientific/special high-end: major progress in commercial technology (hardware, operating environments, tools); significant areas will not be addressed without direct action in the scientific community
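The exponential trends above can be made concrete with a back-of-the-envelope projection. The sketch below assumes an 18-month doubling time (a common Moore's-law rule of thumb, not a figure from the slide) and shows how a terabyte-scale store reaches petabyte scale within the planning horizon the slide describes.

```python
# Back-of-the-envelope exponential-growth projection.
# Assumption (not from the slide): capacity doubles every 18 months.

def projected_capacity(base_tb: float, years: float,
                       doubling_months: float = 18.0) -> float:
    """Capacity in TB after `years`, starting from `base_tb` terabytes."""
    doublings = (years * 12.0) / doubling_months
    return base_tb * 2.0 ** doublings

# A 1 TB store today becomes roughly a petabyte after ten doublings (~15 years):
for years in (0, 5, 10, 15):
    print(f"year {years:2d}: {projected_capacity(1.0, years):8.1f} TB")
```

Under this assumed doubling time, 15 years gives ten doublings, i.e. a factor of 1024, which is the "terabytes affordable, petabytes feasible" transition the slide points to.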

Components of CI-enabled science & engineering
- Collaboration services
- Knowledge management institutions for collection building and curation of data, information, literature, and digital objects
- High-performance computing for modeling, simulation, and data processing/mining
- Individual and group interfaces and visualization
- Physical world: humans; facilities for activation, manipulation, and construction; instruments for observation and characterization
- Global connectivity

Classes of activities
- Research in technologies, systems, and applications
- Development or acquisition
- Operations in support of end users
- Applications of information technology to science and engineering research
- Cyberinfrastructure supporting applications
- Core technologies incorporated into cyberinfrastructure

Some roles of (cyber)infrastructure
- Processing, storage, connectivity: performance, sharing, integration, etc.
- Make it easy to develop and deploy new applications: tools, services, application commonality
- Interoperability and extensibility enable future collaboration across disciplines
- Best practices, models, expertise
- Greatest need is software and experienced people

Shared Opportunity & Responsibility
- Only domain science and engineering researchers can create a vision and implement the methodology and process changes
- Information technologists need to be deeply involved: what technology can be, not what it is; conduct research to advance the supporting technologies and systems; applications inform research
- Need hybrid teams across disciplines and job types
- Need participation from social scientists in design and evaluation of CI-enabled work environments
- Shared responsibility; need mutual self-interest

Cyberinfrastructure Opportunities
- LIGO
- ATLAS and CMS
- NVO and ALMA
- Climate change
The number of nation-scale projects is growing rapidly!

Network for Earthquake Engineering Simulation
- Field equipment
- Laboratory equipment
- Remote users
- High-performance network(s)
- Instrumented structures and sites
- Leading-edge computation
- Curated data repository
- Global connections

Futures: The Computing Continuum
- National petascale systems
- Laboratory terascale systems
- Ubiquitous sensor/actuator networks
- Ubiquitous infosphere
- Collaboratories
- Responsive environments
- Terabit networks
- Contextual awareness
- Smart objects
- Petabyte archives
Building up and building out: science, policy, and education

Information Infrastructure is a First-Class Tool for Science Today

What does the Future Look Like?
- Research
- Infrastructure
- People
- Data
- Software
- Hardware
- Instruments
Future infrastructure drives today's research agenda

More Diversity, New Devices, New Applications
- Instruments (image: digital sky)
- Knowledge from data
- Sensors (image: earthquake-instrumented bridge)
- Wireless networks
- Personalized medicine

Bottom-line Recommendations
- NSF leadership for the Nation of an INITIATIVE to revolutionize science and engineering research, capitalizing on new computing and communications opportunities
- 21st-century cyberinfrastructure includes supercomputing, massive storage, networking, software, collaboration, visualization, and human resources
- Current centers (NCSA, SDSC, PSC) and other programs are a key resource for the INITIATIVE
- Budget estimate: incremental $650-$960M/year (continuing)

Need Effective Organizational Structure
An INITIATIVE OFFICE to:
- Initiate competitive, discipline-driven, path-breaking applications of cyberinfrastructure within NSF that contribute to the shared goals of the INITIATIVE
- Coordinate policy and allocations across fields and projects, with participants across NSF directorates, Federal agencies, and international e-science
- Develop high-quality middleware and other software that is essential and special to scientific research
- Manage individual computational, storage, and networking resources at least 100x larger than individual projects or universities can provide

STAR TAP and StarLight
- STAR TAP: premier operational cross-connect of the world's high-performance academic networks (Mb speeds)
- StarLight: next-generation, cutting-edge optical evolution of STAR TAP connecting experimental networks (1-10Gb)
- Funded by NSF ANIR, EIA and ACIR infrastructure grants to UIC (and NU)
- Substantial support by Argonne MCS

Who is StarLight?
StarLight is jointly managed and engineered by:
- International Center for Advanced Internet Research (iCAIR), Northwestern University: Joe Mambretti, David Carr and Tim Ward
- Electronic Visualization Laboratory (EVL), University of Illinois at Chicago: Tom DeFanti, Maxine Brown, Alan Verlo, Jason Leigh
- Mathematics and Computer Science Division (MCS), Argonne National Laboratory: Linda Winkler, Bill Nickless, Caren Litvanyi, Rick Stevens and Charlie Catlett

What is StarLight?
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications. (Images: Abbott Hall, Northwestern University's downtown Chicago campus; view from StarLight)

StarLight Infrastructure
StarLight is a large, research-friendly co-location facility with space, power and fiber that is being made available to university and national/international network collaborators as a point of presence in Chicago.

StarLight Infrastructure
StarLight is a production GigE and trial 10GigE switch/router facility for high-performance access to participating networks.

StarLight is Operational
Equipment installed at StarLight:
- Cisco 6509 with GigE
- IPv6 router
- Juniper M10 (GigE and OC-12 interfaces)
- Cisco LS1010 with OC-12 interfaces
- Data-mining cluster with GigE NICs
- Visualization/video server cluster (on order)
- SURFnet's GSR
Multiple vendors for 1GigE, 10GigE, DWDM and optical switch/routing in the future

StarLight and TeraGrid
TeraGrid, an NSF-funded Major Research Equipment initiative, has its Illinois hub located at StarLight.

Commercial StarLight
- SBC/Ameritech
- Qwest
- Global Crossing
- AT&T and AT&T Broadband
- …coming soon, Level(3)

USA StarLight
- DoE ESnet
- NASA NREN
- UCAID/Internet2 Abilene
- Metropolitan Research & Education Network (Midwest GigaPoP)

StarLight Engineering Partnerships
- Developers of 6TAP, the IPv6 global testbed, notably ESnet and Viagenie (Canada), have an IPv6 router installed at StarLight
- NLANR works with STAR TAP on network measurement and web caching; the NLANR AMP (Active Measurement Platform) is located at STAR TAP and the web cache is located at StarLight

StarLight Middleware Partnerships Forming
- Provide tools and techniques for (university) customer-controlled 10 Gigabit network flows
- Build general control mechanisms from emerging toolkits, such as Globus, for Grid network resource access and allocation services
- Test a range of new tools, such as GMPLS and OBGP, for designing, configuring and managing optical networks and their components
- Create a new generation of tools for appropriate monitoring and measurement at multiple levels
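The "customer-controlled network flows" idea can be illustrated with a toy admission-control service. Everything below is a hypothetical sketch: the class and method names (`LambdaReservation`, `ReservationService`) and the simple capacity check are illustrative only, not the Globus, GMPLS, or OBGP interfaces named on the slide. The point it shows is the core pattern: an application requests dedicated bandwidth for a bounded time window, and the service admits the request only if no hour in that window would be oversubscribed.

```python
# Toy sketch of a customer-controlled network-resource allocation service.
# All names here are hypothetical -- this is NOT the Globus/GMPLS API,
# just an illustration of bandwidth reservation with admission control.

from dataclasses import dataclass

@dataclass
class LambdaReservation:
    user: str
    gbps: float          # requested bandwidth
    start_hour: int      # inclusive
    end_hour: int        # exclusive

class ReservationService:
    """Admits reservations only while every hour stays within link capacity."""

    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.accepted: list[LambdaReservation] = []

    def _load(self, hour: int) -> float:
        # Total bandwidth already committed during this hour.
        return sum(r.gbps for r in self.accepted
                   if r.start_hour <= hour < r.end_hour)

    def request(self, r: LambdaReservation) -> bool:
        # Admit only if the new flow fits in every hour of its window.
        if all(self._load(h) + r.gbps <= self.capacity
               for h in range(r.start_hour, r.end_hour)):
            self.accepted.append(r)
            return True
        return False

svc = ReservationService(capacity_gbps=10.0)
print(svc.request(LambdaReservation("evl", 8.0, 0, 4)))    # admitted
print(svc.request(LambdaReservation("icair", 8.0, 2, 6)))  # rejected: hours 2-3 oversubscribed
print(svc.request(LambdaReservation("icair", 8.0, 4, 8)))  # admitted
```

A real service of this kind would also handle authentication, preemption, and path setup on the optical hardware; the sketch deliberately reduces it to the scheduling decision.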

Proposed iGrid 2002 Demonstrations
iGrid 2002: September 23-26, 2002, Amsterdam Science and Technology Centre (WTCW), The Netherlands
- To date, 14 countries/locations proposing 29 demonstrations: Canada, CERN, France, Germany, Greece, Italy, Japan, The Netherlands, Singapore, Spain, Sweden, Taiwan, United Kingdom, United States
- Applications to be demonstrated: art, bioinformatics, chemistry, cosmology, cultural heritage, education, high-definition media streaming, manufacturing, medicine, neuroscience, physics, tele-science
- Grid technologies to be demonstrated: major emphasis on grid middleware, data-management grids, data-replication grids, visualization grids, data/visualization grids, computational grids, access grids, grid portals