Global Lambda Integrated Facility (GLIF)


Global Lambda Integrated Facility (GLIF)
CANS 2004, November 30, 2004
Maxine D. Brown
Associate Director, Electronic Visualization Laboratory
Co-Principal Investigator, STAR TAP/StarLight
Co-Principal Investigator, Euro-Link/TransLight
University of Illinois at Chicago

What is GLIF? GLIF is a consortium of institutions, organizations, consortia and national research & education networks that voluntarily share optical networking resources and expertise to develop the Global LambdaGrid for the advancement of scientific collaboration and discovery. GLIF is led by SURFnet and the University of Amsterdam in The Netherlands. www.glif.is

Why GLIF? Motivations, Starting in 2001
Scientific: All science is global.
Political: A neutral forum in which to collaborate with colleagues worldwide to build a production-quality Global LambdaGrid in support of e-science experiments.
Economic: As transoceanic bandwidth becomes more affordable, national research networks have spare capacity they are willing to make available to application scientists, computer scientists and engineers.
Technical: The need to interconnect and interoperate production-quality infrastructure for scientific experiments.

What is the LambdaGrid?
Today’s Grids enable scientists to schedule computing resources and remote instrumentation over today’s “best effort” networks. LambdaGrids enable scientists to also schedule bandwidth.
Wavelength Division Multiplexing (WDM) technology divides light into individual wavelengths (or “lambdas”) on optical fiber, creating parallel networks.
LambdaGrids provide deterministic networks with known and knowable characteristics:
Guaranteed bandwidth (data movement)
Guaranteed latency (collaboration, visualization, data analysis)
Guaranteed scheduling (remote instruments)
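A minimal capacity sketch of the parallel-networks idea above, in Python. The 32-channel count and 10 Gb/s per-lambda rate are illustrative assumptions, not figures from this presentation.

# Aggregate capacity of a WDM link carrying parallel wavelength channels.
def dwdm_capacity_gbps(num_lambdas: int, gbps_per_lambda: float) -> float:
    return num_lambdas * gbps_per_lambda

# A hypothetical 32-wavelength system at 10 Gb/s per lambda:
print(dwdm_capacity_gbps(32, 10.0), "Gb/s aggregate")   # -> 320.0 Gb/s aggregate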

The Next International Optical Network According to GLIF
[Diagram: university departments and campuses connect through GigaPOPs, NRENs, NLR and TransLight lambdas to sites such as CERN and eVLBI instruments, alongside the commodity Internet.]
Source: Bill St. Arnaud

Global Lambda Integrated Facility World Map – December 2004 Predicted international Research & Education Network bandwidth, to be made available for scheduled application and middleware research experiments by December 2004. Visualization courtesy of Bob Patterson, NCSA. www.glif.is

Global Lambda Integrated Facility Predicted Bandwidth for Scheduled Experiments, December 2004 Visualization courtesy of Bob Patterson, NCSA. www.glif.is


Steps Leading Up to GLIF
Creation of international Open Exchanges: StarLight (2001), NetherLight (2001)
Cooperation among institutions, organizations, consortia and national research networks that voluntarily share optical networking resources and expertise for the advancement of scientific collaboration and discovery: LambdaGrid meetings in 2001, 2002, 2003 and 2004
Collaborations among discipline scientists, computer scientists and network engineers: iGrid 1998, iGrid 2000, iGrid 2002, SC conferences, etc.

Early Creation of International Open Exchanges (examples)

STAR TAP: Science Technology And Research Transit Access Point
NSF funded the development of STAR TAP in 1997 to provide a persistent infrastructure for the long-term interconnection and interoperability of advanced international networking, in support of applications, performance measuring, and technology evaluations. By 2000, STAR TAP had become a model for Next-Generation Internet eXchanges (NGIXs).

Open Exchange “By Researchers, For Researchers”
StarLight is a 1GE and 10GE switch/router facility for high-performance access to participating networks, and also offers true optical switching for wavelengths.
[Photo: view from StarLight at Abbott Hall, Northwestern University’s downtown Chicago campus]

NetherLight Open Exchange
[Diagram: the NetherLight DWDM exchange in Amsterdam, operated by SURFnet, with 2.5 and 10 Gbit/s SURFnet, NSF and IEEAF links to Chicago, New York (MANLAN), London (UKLight), Geneva (CERN), Prague (CzechLight), Stockholm (NorthernLight) and Dwingeloo (ASTRON/JIVE).]

Actual TransLight Lambdas Today: TransLight Governance Ends in 2004; Supports GLIF
European lambdas to US (red): 10Gb Amsterdam–Chicago, 10Gb London–Chicago, 10Gb Amsterdam–NYC
Canadian lambdas to US (white): 30Gb Chicago–Canada–NYC, 30Gb Chicago–Canada–Seattle
US sublambdas to Europe (grey): 6Gb Chicago–Amsterdam
Japan JGN II lambda to US (cyan): 10Gb Chicago–Tokyo
European lambdas (yellow): 10Gb Amsterdam–CERN, 2.5Gb Prague–Amsterdam, 2.5Gb Stockholm–Amsterdam, 10Gb London–Amsterdam
IEEAF lambdas (blue): 10Gb NYC–Amsterdam, 10Gb Seattle–Tokyo
CAVEWave/PacificWave (purple): 10Gb Chicago–Seattle, 10Gb Seattle–LA–San Diego

Cooperation to Share Optical Networking Resources and Expertise

GLIF History
Invitation-only annual meetings to discuss optical networking and the Global LambdaGrid:
2001 in Amsterdam, hosted by the Trans-European Research and Education Networking Association (TERENA, Europe)
2002 in Amsterdam, hosted by the Amsterdam Science and Technology Centre
2003 in Reykjavik, Iceland, hosted by NORDUnet
2004 in Nottingham, UK, hosted by UKERNA

GLIF 2004: 60 World Leaders in Advanced Networking and the Scientists Who Need It Photo courtesy of Steve Wallace

GLIF 2004 Who’s Who
Australia’s Research and Education Network (AARNet)
CANARIE (Canada)
CERN
CESNET (Czech Republic)
Chinese Academy of Science
DANTE/GÉANT (Europe)
European Commission
HEAnet (Ireland)
Japanese Gigabit Network 2 (JGN-II)
Korea Institute of Science and Technology Information (KISTI)/KREONet2
National Center for High Performance Computing (NCHC, Taiwan)
National Institute of Advanced Industrial Science and Technology (AIST, Japan)

GLIF 2004 Who’s Who
NORDUnet (Nordic countries)
SURFnet/NetherLight (The Netherlands)
Trans-European Research and Education Networking Association (TERENA, Europe)
UK Joint Information Systems Committee (JISC)
UKERNA/UKLight (United Kingdom)
WIDE (Japan)
USA: National LambdaRail, Internet2, DoE ESnet, TeraGrid, Illinois’ I-WIRE initiative, California’s CENIC network, NSF StarLight, NSF High Performance International Internet Services awardees (Euro-Link, TransPAC, GLORIAD and AMPATH), major GigaPoPs (PNWGP and Pacific Wave, MREN, MAX), the Internet Educational Equal Access Foundation (IEEAF), and major universities and government laboratories.

GLIF Working Groups
Governance: To create an open, neutral community for anyone who wants to contribute resources and/or services (bandwidth, software, application drivers) to build the Global LambdaGrid
Engineering: To define the types of links and the minimum/maximum configurations of Optical Exchange facilities in order to assure the interoperability and interconnectivity of participating networks
Applications: To enable the super-users providing the application drivers; to find new e-science drivers; to move scientific experiments into production usage as they mature; and to document these advancements
Control Plane and Grid Integration Middleware (proposed): To agree on the interfaces and protocols for lambda provisioning and management

Collaborations Among Discipline Scientists, Computer Scientists, Network Engineers

iGrid 1998 at SC’98, November 7-13, 1998, Orlando, Florida, USA
10 countries: Australia, Canada, Germany, Japan, Netherlands, Russia, Singapore, Switzerland, Taiwan, USA
22 demonstrations featured technical innovations and application advancements requiring high-speed networks, with emphasis on remote instrumentation control, tele-immersion, real-time client-server systems, multimedia, tele-teaching, digital video, distributed computing, and high-throughput, high-priority data transfers.
The SC’98 International Grid (iGrid) research demonstrations showcase international collaborations using advanced high-speed networks, enabling researchers to work together, whether their colleagues live across the country or across the ocean, and to access geographically-distributed computing, storage, and display resources. The iGrid booth, jointly sponsored by the Electronic Visualization Laboratory at the University of Illinois at Chicago and Indiana University, provides global connectivity so collaborators from the United States, Australia, Canada, Germany, Japan, The Netherlands, Russia, Singapore, Switzerland, and Taiwan can demonstrate their use of advanced networks to solve complex computational problems. Enabling these international demonstrations is the National Science Foundation (NSF) sponsored initiative STAR TAP, the Science, Technology And Research Transit Access Point [www.startap.net]. Started in 1997, STAR TAP anchors the NSF vBNS (very high-speed Backbone Network Service) international program. Networks from Canada (CAnet-2), Singapore (SingaREN), Taiwan (TANet), Russia (MirNET), and the Asian-Pacific Advanced Network consortium (APAN) are connected; Indiana University is the lead institution of the APAN-USA joint venture, named TransPAC [www.transpac.org]. STAR TAP connections to the Nordic countries (NORDUnet), The Netherlands (SURFnet), France (RENATER), and Israel are imminent. Other USA federal agency advanced networks, notably the Department of Defense DREN, Department of Energy ESnet, and NASA NREN, are also connected to STAR TAP. STAR TAP is managed by the Electronic Visualization Laboratory, Argonne National Laboratory, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, and Chicago’s Ameritech Advanced Data Services. The Electronic Visualization Laboratory and Indiana University, members of NCSA’s National Computational Science Alliance partnership, are working to advance the development of the International Technology Grid. The Grid is a prototype 21st century computational and information infrastructure integrating high-performance computers, visualization environments, remote instruments, and massive databases via high-speed networks to support advanced applications.
www.startap.net/igrid98

iGrid 2000 at INET 2000, July 18-21, 2000, Yokohama, Japan
14 regions: Canada, CERN, Germany, Greece, Japan, Korea, Mexico, Netherlands, Singapore, Spain, Sweden, Taiwan, United Kingdom, USA
24 demonstrations featuring technical innovations in tele-immersion, large datasets, distributed computing, remote instrumentation, collaboration, streaming media, human/computer interfaces, digital video and high-definition television, and grid architecture development, as well as application advancements in science, engineering, cultural heritage, distance education, media communications, and art and architecture
100Mb transpacific bandwidth, carefully managed
www.startap.net/igrid2000

iGrid 2002 Application Demonstrations
28 demonstrations from 16 countries: Australia, Canada, CERN, France, Finland, Germany, Greece, Italy, Japan, Netherlands, Singapore, Spain, Sweden, Taiwan, the United Kingdom and the USA
Applications demonstrated: art, bioinformatics, chemistry, cosmology, cultural heritage, education, high-definition media streaming, manufacturing, medicine, neuroscience, physics, tele-science
Grid technologies demonstrated: major emphasis on grid middleware, data management grids, data replication grids, visualization grids, data/visualization grids, computational grids, access grids, grid portals
25Gb transatlantic bandwidth (100Mb/attendee, 250x iGrid 2000!)
www.startap.net/igrid2002

Electronic Very Long Baseline Interferometry (e-VLBI)
22 September 2004 – Creating a Global Virtual Telescope
Traditional VLBI: VLBI is a technique used by radio astronomers to image the sky in detail. Arrays of telescopes across countries/continents record data onto tape and then ship the tapes to a central processing facility (correlator) for analysis. The resulting image has a resolution equal to that of a telescope as large as the maximum antenna separation.
e-VLBI links the telescopes electronically, enabling real-time analysis. A 20-hour observation of the star IRC+10420, heading toward a cataclysmic supernova explosion, involved telescopes in the UK, Sweden, the Netherlands, Poland and Puerto Rico. The maximum separation of the antennas was 8200 km, giving a resolution of 20 milliarcseconds (mas), about 5 times better than the Hubble Space Telescope. The antenna at Arecibo, in Puerto Rico, increased the sensitivity of the telescope array by a factor of 10.
Each telescope was connected to its country’s NREN and the data routed at 32 Mbps per telescope through GÉANT to SURFnet. The data was then sent to the Joint Institute for VLBI in Europe (JIVE) in the Netherlands (6Gbps link from NetherLight), the central processing facility for the European VLBI Network (EVN). 9 terabits of data were correlated and the resulting image returned to the participating sites.
www.evlbi.org
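A back-of-the-envelope Python sketch of the data volume implied by the figures above, assuming all five stations stream at 32 Mb/s for the full 20 hours; that continuous-streaming assumption is an idealization, and the slide reports 9 terabits actually correlated.

RATE_MBPS = 32       # per-telescope rate routed through GEANT to SURFnet (from the slide)
HOURS = 20           # duration of the IRC+10420 observation (from the slide)
STATIONS = 5         # UK, Sweden, the Netherlands, Poland, Puerto Rico

bits_per_station = RATE_MBPS * 1e6 * HOURS * 3600
total_terabits = STATIONS * bits_per_station / 1e12
print(f"~{total_terabits:.1f} Tb of raw data")   # ~11.5 Tb if every station ran continuously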

LOFAR: Lambdas as Part of Research Instruments
LOFAR is a modern radio telescope that uses simple omni-directional antennas instead of dish antennas. (Basic radio telescope technology has not changed since the 1960s; a dish telescope 100x larger than existing instruments is therefore unaffordable.) A more sensitive telescope will see stars, galaxies, black holes and other objects that are farther away.
LOFAR is an IT-telescope: electronic signals from the antennas are digitized, transported to a central digital processor, and combined in software to emulate a conventional antenna. The Dutch have funded phase 1; 15,000 antennas and maximum baselines of 100 km will be built. Data transport requirements are in the range of many Tbps.
Many data collection points; processing in Groningen; large datasets distributed to many destinations in The Netherlands and abroad.
www.lofar.org

SC 2004: Data Reservoir Project
University of Tokyo, Fujitsu Computer Technologies LTD, WIDE, NTT Communications, SARA, University of Amsterdam
Land Speed Record: the world’s longest 10Gbps circuit ever recorded, a 31,248 km circuit between Pittsburgh and CERN via Tokyo. This demonstration achieved a single-stream TCP payload of 7.2 Gbps, for a new record of 224,985 terabit-meters per second.
This represents a collaboration among WIDE, APAN/JGN-II, NetherLight, SURFnet, StarLight, CANARIE, PNWGP, IEEAF and SCinet.
http://www.supercomputingonline.com/article.php?sid=7514
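A sketch of the bandwidth-distance product behind the record figure above; the land speed record metric is throughput multiplied by path length, and these inputs are the two numbers quoted on the slide.

THROUGHPUT_BPS = 7.2e9           # single-stream TCP payload rate (from the slide)
DISTANCE_M = 31_248 * 1_000      # circuit length in meters (from the slide)

product_tbit_m = THROUGHPUT_BPS * DISTANCE_M / 1e12
print(f"{product_tbit_m:,.0f} terabit-meters per second")   # ~224,986, matching the slide's 224,985 figure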

SC 2004: Computing the Quantum Universe
California Institute of Technology (Caltech), Stanford Linear Accelerator Center (SLAC), Fermi National Accelerator Laboratory, CERN, University of Florida, Florida International University, National LambdaRail and Abilene, University of Manchester, University College London and UKLight, University of London, Rio de Janeiro State University (Brazil), State University of Sao Paulo, Academic Network of Sao Paulo (ANSP), Kyungpook National University (Korea)
Bandwidth Challenge for Sustained Bandwidth: transferred data in and out of the SC site in Pittsburgh at a rate of 101.13 Gbps, roughly equivalent to transferring the contents of three DVDs in just one second. Seven 10Gbps links were connected to the Caltech booth and three 10Gbps links to the SLAC/FermiLab booth (four dedicated NLR waves from the show floor to Los Angeles (2 waves), Chicago, and Jacksonville, as well as three 10Gbps connections to Abilene, the TeraGrid and ESnet). Also, 2.5Gbps from Pittsburgh to Miami (via Abilene) and to Brazil.
http://supercomputing.fnal.gov
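A quick check of the “three DVDs per second” comparison above; the 4.7 GB single-layer DVD size is an assumption, not stated on the slide.

DVD_GB = 4.7                   # assumed single-layer DVD capacity, in gigabytes
RATE_GBPS = 101.13             # sustained aggregate rate reported on the slide

dvds_per_second = RATE_GBPS / (DVD_GB * 8)    # 8 bits per byte
print(f"~{dvds_per_second:.1f} DVDs per second")   # ~2.7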

SC 2004: The OptIPuter Scalable Adaptive Graphics Environment (SAGE)
UIC Electronic Visualization Laboratory, UCSD National Center for Microscopy and Imaging Research, UCSD Scripps Institution of Oceanography, USGS Earth Resources Observation Systems Data Center, University of Amsterdam, National LambdaRail
SAGE used optical networking to retrieve very-high-resolution 2D and 3D datasets from servers in San Diego and Amsterdam, render them on clusters in Chicago, and display them in Pittsburgh.
www.evl.uic.edu/cavern/sage, www.optiputer.net

SC 2004: The OptIPuter Trans-Pacific HDTV
UCSD National Center for Microscopy and Imaging Research, Biomedical Informatics Research Network, Osaka University, KDDI R&D Laboratories, National LambdaRail
HDTV from the world’s largest microscope in Japan is streamed live to SC 2004 and San Diego while being controlled by project scientists in San Diego. High-quality HDTV is essential for resolving useful information. Dedicated wavelengths offer lower latencies and control of network jitter, which is especially important for large streams of video data.
http://ncmir.ucsd.edu

SC 2004: The OptIPuter UDT Fairness and Friendliness
UIC National Center for Data Mining; Johns Hopkins University, Department of Physics and Astronomy; SARA Computing and Networking Services, Amsterdam, The Netherlands; National LambdaRail
UDP-based Data Transfer protocol (UDT) is a fast and efficient transport protocol used to transfer data over long distances. UDT’s fairness and friendliness properties are visually displayed on a web interface.
http://demos.dataspaceweb.net

SC 2004: Sloan Digital Sky Survey
UIC National Center for Data Mining; Johns Hopkins University, Department of Physics and Astronomy; Tokyo; SARA Computing and Networking Services, Amsterdam, The Netherlands
Bandwidth Challenge 3rd Place: 1.5 Terabytes of Sloan Digital Sky Survey (SDSS) data (release 3) was sent over the network instead of shipping disks and computers. Using the UDT transport protocol, data was transferred to disk at 1.6 Gbps between two nodes at SC in Pittsburgh and two nodes in Tokyo (a disk I/O bottleneck). Data was also moved memory-to-memory over a larger cluster at 16Gbps (out of 20Gbps) to nodes in Tokyo and Amsterdam. Next month, data will be sent from Chicago to Japan, Korea and Australia. The clusters are part of an NSF/Army-sponsored Teraflow Testbed.
www.sdss.org, http://demos.dataspaceweb.net
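Rough transfer-time arithmetic for the disk-to-disk result above, assuming the 1.6 Gbps rate is sustained for the whole transfer.

DATA_TB = 1.5        # SDSS data release 3 volume, terabytes (from the slide)
RATE_GBPS = 1.6      # disk-to-disk throughput between Pittsburgh and Tokyo (from the slide)

seconds = DATA_TB * 8e12 / (RATE_GBPS * 1e9)
print(f"~{seconds / 3600:.1f} hours at {RATE_GBPS} Gb/s")   # ~2.1 hours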

SC 2004: Multi-Gigabit Interactive Video Transmission
AARNet (Australia’s Academic and Research Network), ResearchChannel, University of Washington/Pacific Northwest GigaPoP, University of Hawaii, NLR
This is the first demonstration of uncompressed, interactive high-definition video across the Pacific, between Canberra, Australia, and SC 2004 in Pittsburgh, at 1.4Gbps in each direction. Future expansion of AARNet is planned to offer these and additional resources to other continents, and to bring scientists and researchers together by exploiting new Internet technologies.
www.aarnet.edu.au, www.researchchannel.org

iGrid 2005: September 26-30, 2005, San Diego
The Global Lambda Integrated Facility
University of California, San Diego; California Institute for Telecommunications and Information Technology [Cal-(IT)2]

The Future of GLIF
2005 at UCSD, hosted by Cal-(IT)2 in conjunction with iGrid 2005
2006 in Japan, hosted by the WIDE Project (Jun Murai) and JGN-II (Tomonori Aoyama)

Thanks to NSF and Colleagues for These International Networking Opportunities
StarLight/Euro-Link/TransLight planning, research, collaborations, and outreach efforts are made possible, in major part, by funding from:
National Science Foundation (NSF) awards SCI-9980480, SCI-9730202, CNS-9802090, CNS-9871058, SCI-0225642, and CNS-0115809
State of Illinois I-WIRE Program, and major UIC cost sharing
Northwestern University, for providing space, power, fiber, engineering and management
Pacific Wave, StarLight, National LambdaRail, CENIC, PNWGP, CANARIE, SURFnet, UKERNA, and IEEAF for lightpaths
DoE/Argonne National Laboratory for StarLight and I-WIRE network engineering and design