Grids and Biology: A Natural and Happy Pairing Rick Stevens Director, Mathematics and Computer Science Division Argonne National Laboratory Professor, Department of Computer Science Director, Computation Institute The University of Chicago

[Diagram: TeraGrid 40 Gbit/s DWDM wide-area network. I-WIRE dark fiber links NCSA/UIUC, ANL, UIC, StarLight/Northwestern University, Illinois Institute of Technology, and the University of Chicago through multiple carrier hubs in Chicago; StarLight is the international optical peering point (see The DTF backbone connects Chicago to Los Angeles and San Diego; Abilene connects Chicago, Indianapolis (Abilene NOC), and Urbana via OC-48 (2.5 Gb/s), with multiple 10 GbE circuits over Qwest and over I-WIRE dark fiber. Solid lines were in place and/or available by October 2001; dashed I-WIRE lines were planned for summer 2002.]

[Diagram: "TeraGrid in a Nutshell" — resources at the five sites, coupled by 4 lambdas:
- NCSA: 256 2p Madison + 667 2p Madison (Myrinet), 230 TB FCS SAN
- SDSC: 128 2p Madison + 256 2p Madison (Myrinet), 500 TB FCS SAN, 1.1 TF Power4 Federation
- Caltech: 32 Pentium4, 52 2p Madison + 20 2p Madison (Myrinet), 100 TB DataWulf
- ANL: 96 Pentium4, 64 2p Madison (Myrinet), 96 GeForce4 graphics pipes, 4p Vis, 20 TB storage
- PSC: 750 4p Alpha EV68 (Quadrics), 128p EV7 Marvel, 16 2p (ER) Madison (Quadrics), 75 TB storage]

OptIPuter Hosting on TeraGrid One way to think of TeraGrid is as a hosting environment for virtual OptIPuters: an OptIPuter can be viewed as an overlay on TeraGrid resources, combining a collection of compute, data, and visualization resources via optical networking.
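The "overlay" idea in the slide above can be sketched in a few lines: a virtual OptIPuter is just a named bundle of compute, data, and visualization resources stitched together by dedicated lightpaths. This is a toy illustration, not project code; all the resource names below are invented for the example.

```python
# Toy sketch of a virtual OptIPuter overlaid on TeraGrid resources.
# Every resource name here is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class VirtualOptIPuter:
    name: str
    compute: list = field(default_factory=list)   # cluster allocations
    storage: list = field(default_factory=list)   # data resources
    viz: list = field(default_factory=list)       # display resources
    lambdas: list = field(default_factory=list)   # dedicated optical paths

overlay = VirtualOptIPuter(
    name="demo-overlay",
    compute=["ncsa-madison", "sdsc-madison"],
    storage=["sdsc-san"],
    viz=["anl-tiled-wall"],
    lambdas=["chi-la-10ge"],
)
print(overlay.name, len(overlay.compute))  # demo-overlay 2
```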

Super Networks Enable New Applications
- Database blasting: "give me the whole thing and I'll sort it out at my end"
- On-demand caching for collaborative analysis
- Spontaneous large-scale tuple spaces: mining rabbit trails
- Streaming tiled displays (>100-megapixel displays)
- Super Access Grid: "let's watch all the digital streams in parallel"
- Deep images: multiresolution, navigable images and movies, "give me all the versions in all the formats"
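To see why tiled displays push into "super network" territory, a back-of-envelope bandwidth estimate helps. The slide only says ">100 megapixels", so the bit depth and frame rate below are assumptions for illustration.

```python
# Raw (uncompressed) bandwidth needed to stream a tiled display.
# Pixel count, bit depth, and frame rate are illustrative assumptions.

def stream_gbps(megapixels: float, bits_per_pixel: int = 24,
                frames_per_sec: int = 30) -> float:
    """Uncompressed stream bandwidth in gigabits per second."""
    bits = megapixels * 1e6 * bits_per_pixel * frames_per_sec
    return bits / 1e9

# A 100-megapixel wall at 24-bit color, 30 fps:
print(f"{stream_gbps(100):.0f} Gb/s uncompressed")  # 72 Gb/s
```

Even with compression, a single such stream saturates multiple 10 GbE lambdas, which is the point of the slide.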

Super Network Applications
- Trickle-charging shared petabyte databases
- Always-up-to-date repositories (mirrors, etc.)
- Persistent objects, and cooperative computation that extends persistent objects: "give me all versions of that object that ever existed" (dump the CVS repo)
- Spontaneous peer-to-peer infrastructure: match the structure of the infrastructure to the structure of social networks
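The "trickle charging" and always-up-to-date mirror ideas above amount to shipping only the blocks that changed. A minimal sketch, assuming fixed-size blocks and SHA-256 digests (both choices are illustrative, not from the talk):

```python
# Hedged sketch of trickle-charging a mirror: compare per-block hashes
# and return only the block indices that need to be re-sent.
import hashlib

def changed_blocks(local: bytes, remote_hashes: list,
                   block: int = 4096) -> list:
    """Indices of local blocks whose hash differs from the mirror's."""
    out = []
    for i in range(0, len(local), block):
        h = hashlib.sha256(local[i:i + block]).hexdigest()
        idx = i // block
        if idx >= len(remote_hashes) or remote_hashes[idx] != h:
            out.append(idx)
    return out

data = b"a" * 4096 + b"b" * 4096
stale = [hashlib.sha256(b"a" * 4096).hexdigest(), "0" * 64]
print(changed_blocks(data, stale))  # [1]
```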

Super Networks Require New Architectures
- Super-high-bandwidth systems have high concurrency: Tb/s = 100 x 10 Gb/s, 25 x 40 Gb/s, etc.
- Large bandwidth-delay products
- Non-intuitive chunking, based on compute/BDP
- Large granularity in transactions (see previous slide)
- Stream-processing architectures: flow-oriented services become increasingly important
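The bandwidth-delay product (BDP) mentioned above is the amount of data that must be in flight to keep a long fat pipe full, and it is what makes chunking non-intuitive at these speeds. A quick sketch; the 10 Gb/s link and 60 ms round-trip time are illustrative assumptions, not figures from the talk:

```python
# Bandwidth-delay product: bytes in flight needed to fill the pipe.
# Link speed and RTT below are illustrative assumptions.

def bdp_bytes(gbps: float, rtt_ms: float) -> int:
    """BDP in bytes for a link of `gbps` Gb/s and `rtt_ms` ms RTT."""
    return int(gbps * 1e9 / 8 * rtt_ms / 1e3)

bdp = bdp_bytes(10, 60)
print(f"BDP = {bdp / 1e6:.0f} MB")  # 75 MB in flight
```

A 75 MB window dwarfs default TCP buffers, which is why flow-oriented services and large-granularity transactions become the natural unit of work.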

Super Network Architecture Implications: Speculation
- Super networks create a computing and storage commons
- Increased use of speculation becomes possible
- Resource (commons) management
- Personal-infrastructure complexity will become a major software problem
- Extensions of OS concepts are needed