DTF/ETF Update
Joint Techs, Columbus, OH
July 19, 2004
Linda Winkler
Objectives of TeraGrid Infrastructure
- To provide an unprecedented increase in the computational capabilities available to the open research community, both in terms of capacity and functionality.
- To deploy a distributed "system" using Grid technologies rather than a "distributed computer" with centralized control, allowing the user community to map applications across the computational, storage, visualization, and other resources as an integrated environment.
- To create an "enabling cyberinfrastructure" for scientific research in such a way that additional resources (at additional sites) can be readily integrated, and to provide a model that can be reused to create additional Grid systems that may or may not interoperate with TeraGrid.
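The second objective, mapping applications across distributed resources through Grid middleware rather than centralized control, can be made concrete with a small sketch. The example below assumes, for illustration only, the Globus Toolkit's GRAM service as the job-submission layer; the deck itself does not name specific middleware, and the GRAM contact string, executable path, and CPU count are hypothetical placeholders.

```python
# Minimal sketch (assumption, not from the slides): handing one application run
# to a remote compute resource through Globus GRAM.  A real submission also
# requires the Globus Toolkit client tools and a valid proxy credential
# (grid-proxy-init) on the submitting host.
import subprocess

# Hypothetical GRAM contact string: a site login node plus its PBS jobmanager.
contact = "tg-login.example.teragrid.org/jobmanager-pbs"

# RSL job description: a 4-way MPI run of an executable already staged on the
# remote cluster's shared filesystem (the path is illustrative only).
rsl = "&(executable=/home/user/bin/my_mpi_app)(count=4)(jobType=mpi)"

# globusrun is the GT2 submission client; -b requests batch (detached)
# submission and -r names the resource manager contact.
result = subprocess.run(
    ["globusrun", "-b", "-r", contact, rsl],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

The point of the sketch is the separation of concerns the slide describes: the user names a resource and a job description, and the Grid middleware, not a central scheduler, brokers the execution at the remote site.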
TeraGrid Partner Sites
Phase 1 Production Jan 1, 2004; Phase 2 Production June 30, 2004:
- CACR at Caltech
- NCSA
- SDSC
- UC/ANL

ETF-1 Production April 1, 2004:
- PSC

ETF-2 Production October 1, 2004:
- Indiana University: will connect 0.166 TF compute, storage, viz, and life science data sets deriving from the Indiana Genomics Initiative
- Oak Ridge National Laboratory: is connecting its High Flux Isotope Reactor and Spallation Neutron Source instruments, as well as ORNL's Center for Computational Sciences
- Purdue University: will connect compute, storage, viz, and specialized instrumentation including the Purdue Terrestrial Observatory
- Texas Advanced Computing Center: brings access to high-end computers capable of 6.2 teraflops of compute power, a terascale visualization system, a 2.8-petabyte mass storage system, and geoscience data collections
Timeline
- June 2002: ANL & NCSA 30G to CHI via I-WIRE
- Sept 2002: Chicago hub established; SDSC 30G to LA via CENIC
- Oct 2002: Qwest delivers 20G LA-CHI
- Jan 2003: LA hub established
- Apr 2003: CACR 30G to LA
- July 2003: PSC 30G to CHI via NLR
- 2004 (work in progress): ATL hub, IPGrid, ORNL, TACC
ETF/TeraGrid Network Today
[Network diagram: DTF backbone core switches (T640) at hub facilities at 600 W 7th and 818 W 7th (Los Angeles) and 455 N. Cityfront Plaza (the Qwest Fiber Collocation Facility, Chicago), with Starlight, interconnected over the Qwest and NLR networks; each site attaches through a site border switch, application gateways, and a cluster aggregation switch (E1200 or 6509), with a GSR serving additional sites and networks.]

Site resources:
- Caltech Cluster: 0.5 TF IA-64, IA32 Datawulf, Sun Storage Server, 86 TB storage
- SDSC Cluster: 4 TF IA-64, 1.1 TF Power4, DB2 Server, 500 TB storage
- NCSA Cluster: 10 TF IA-64, 240 TB storage
- ANL Cluster: 96 visualization nodes, 1.1 TF IA-64, 20 TB storage
- PSC Clusters: 6 TF EV68, 0.3 TF EV7 shmem, Storage Server, 225 TB storage
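Summing the per-site figures above (an informal tally from this slide, not a quoted total), the resources currently on the ETF network come to roughly:

\[
0.5 + (4 + 1.1) + 10 + 1.1 + (6 + 0.3) \approx 23\ \text{TF of compute},\qquad
86 + 500 + 240 + 20 + 225 = 1071\ \text{TB} \approx 1.1\ \text{PB of disk}
\]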
ETF2/TeraGrid Topology Future
[Topology diagram: backbone hubs at LA, CHI, and ATL interconnected by OC-192 trunks (up to 4*OC192 on the LA-CHI span); attached sites include Caltech, SDSC, NCSA, ANL, PSC, ORNL, TACC, and the IPGrid sites PU, IUPUI, and IUB, with access links labeled 10 Gb/s, OC192, 2*10GE, 20 Gb/s, 3*10Gb/s, and 3*OC192. Legend: backplane router, cluster aggregation switch.]
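For scale, these trunk labels can be read against the standard SONET OC-192 line rate of roughly 9.95 Gb/s (a general networking figure, not one stated on the slide):

\[
3 \times \text{OC-192} \approx 3 \times 9.95\ \text{Gb/s} \approx 30\ \text{Gb/s},\qquad
4 \times \text{OC-192} \approx 40\ \text{Gb/s}
\]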