

1 NOAA R&D High Performance Computing Colin Morgan, CISSP High Performance Technologies Inc (HPTI) National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory, Princeton, NJ

2 R&D HPCS Background Information
Scientific large-scale heterogeneous supercomputing architecture
Provides cutting-edge technology for weather and climate model developers
Models are developed for weather forecasts, storm warnings, and climate change forecasts

3 R&D HPCS Locations
Princeton, NJ
Gaithersburg, MD
Boulder, CO

3 R&D HPCS Background Information: Supercomputing
Princeton, NJ (GFDL): SGI Altix 4700 cluster, 8,000 cores, 18 PB of data
Gaithersburg, MD: IBM Power6 cluster, ~1,200 Power6 cores, 3 PB of data
Boulder, CO (ESRL): 2 Linux clusters, ~4,000 Xeon Harpertown/Woodcrest cores, ~1 PB of data

Remote Computing: Allocated Hours
Oak Ridge National Labs: 104 million hours
Argonne National Labs: 150 million hours
NERSC: 10 million hours

4 R&D HPCS Information: Data Requirements
Current data requirements:
GFDL current data capacity: 32 PB
GFDL current data total: 18 PB
GFDL growth of 1 PB every 2 months
Remote compute: 6-8 TB/day of data ingest
How does that much data get transferred?

Future data requirements:
30-50 TB/day from remote computing
150-200 PB of total data in the next 3 years
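For context, moving 6-8 TB/day today (or the projected 30-50 TB/day) implies a sustained network rate. A quick back-of-the-envelope check, assuming decimal units (1 TB = 10^12 bytes) and a full 24-hour transfer window:

```python
def sustained_mbps(tb_per_day: float) -> float:
    """Sustained rate in Mb/s needed to move tb_per_day terabytes in 24 hours.

    Assumes decimal units: 1 TB = 10**12 bytes, 1 Mb = 10**6 bits.
    """
    bits_per_day = tb_per_day * 10**12 * 8   # total bits to move each day
    return bits_per_day / 86_400 / 10**6     # spread over 86,400 seconds

# Today's 6-8 TB/day ingest needs roughly 0.56-0.74 Gb/s sustained:
print(round(sustained_mbps(6)), round(sustained_mbps(8)))    # 556 741
# The projected 30-50 TB/day would need several Gb/s sustained:
print(round(sustained_mbps(30)), round(sustained_mbps(50)))  # 2778 4630
```

This is why the 45 Mb/s and 1 Gb/s site links discussed later are a bottleneck for the projected volumes, and 10 Gb/s paths are the target.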

5 R&D HPCS Information: Current Data Transfer Methods - BBCP
BBCP transfer rates are affected when the file is being closed out.

Source | Destination | Window Size | MTU  | File Size | BBCP Transfer Rate | Transfer Rate (Mb/s)
GFDL   | ORNL        | 1024        | 1500 | 410 MB    | 11.5 MB/s          | 92.0 Mb/s
GFDL   | ORNL        | 1024        | 1500 | 410 MB    | 14.4 MB/s          | 115.2 Mb/s
ORNL   | GFDL        | 1024        | 1500 | 410 MB    | 9.0 MB/s           | 72.0 Mb/s
ORNL   | GFDL        | 1024        | 1500 | 410 MB    | 9.8 MB/s           | 78.4 Mb/s

400-500 Mb/s is the typical transfer rate, limited by disk I/O, not the network.
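The two rate columns in the table are the same measurement in different units: Mb/s is simply MB/s times 8 (bytes to bits). A quick consistency check of the table's rows:

```python
# Each (MB/s, Mb/s) pair from the BBCP table above; the Mb/s
# figure should equal the MB/s figure times 8 bits per byte.
rows = [(11.5, 92.0), (14.4, 115.2), (9.0, 72.0), (9.8, 78.4)]
for mbytes_per_s, mbits_per_s in rows:
    assert abs(mbytes_per_s * 8 - mbits_per_s) < 1e-9
print("all rows consistent")
```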

6 R&D HPCS Information: Future Data Transfer Methods - GridFTP
[Network diagram] GridFTP receive hosts ntt1 and ntt2 and pull host ntt3 (for IC0/IC9), plus ntt4, all run RHEL5, with the write interface on a NetApp and the receive interface external. They sit behind a perimeter firewall on ESnet, with separate VLANs for Argonne, NERSC, and Oak Ridge, plus management and private VLANs. IC0 and IC9 (OpenSUSE 10.1) stage data through a 100 TB NetApp disk cache to the SGI cluster. Switches are Cisco 6500 series with a Cisco FWSM firewall; links are a mix of 1G, 10G, and 20G, including a 10G switch fabric connection.
Requires 6-8 TB/day of inbound data ingest from ORNL
ANL and NERSC do not have the same data ingest requirements

7 R&D HPCS Information: Fiber Networking
What are we doing now? What do we plan to do?
ESnet, Internet2, NLR, MAX, NyserNet, Bison, SDN, 3ROX, FRGP, MagPi

8 R&D HPCS Information: Current Connectivity
Networks: Commodity Internet, Internet2, ESnet
Sites: Boulder, CO; Gaithersburg, MD; Princeton, NJ
Link speeds: 45 Mb/s, 1 Gb/s, 10 Gb/s

9 R&D HPCS Information: Network Connectivity

10 R&D HPCS Information: Potential Future Networks (Tooth Fairy $$)
Working on preliminary designs
Design review scheduled for early May
Deployment in Q2 of FY10
Looking to talk with: ESnet, Internet2, National LambdaRail, Indiana University Global NOC, interested GigaPoPs
The primary focus is to provide a high-speed network to NOAA's research facilities.

11 R&D HPCS Information: QUESTIONS?

