DoD HPC Modernization Program & Move Toward Emerging Architectures

1 DoD HPC Modernization Program & Move Toward Emerging Architectures
Tom Dunn, Naval Meteorology & Oceanography Command, 20 November 2014

2 Outline
- HPC recent trends (per Top500 list)
- Recent 2014 DoD acquisitions
- Expected processor competition
- Onward toward exascale

3 Navy DoD SUPERCOMPUTING RESOURCE CENTER
Peak computational performance (teraflops) estimates follow Moore's Law (~2x every 2 years).
[Chart: peak teraflops by year, 1997 onward; currently 2,556 TFs, with ~5,760 TFs and ~10,000 TFs estimated for upcoming systems.]

4 Navy DSRC Capabilities
- One of the most capable HPC centers in the DoD and the nation
- Chartered as a DoD Supercomputing Center in 1994
- Computational performance approximately doubles every two years; currently 2,556 teraflops (projection sketch below)
- Systems reside on the Defense Research and Engineering Network (DREN) with 10 Gb connectivity (19 Dec 2013)
- 15% of Navy DSRC's computational and storage capacity is reserved for operational use by CNMOC activities
- R&D and CNMOC operations are placed in separate system partitions and queues
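The two-year doubling cadence lends itself to a quick projection. A minimal sketch (Python), assuming a clean doubling every two years from the 2,556 TF figure above; the projected values are illustrative, not announced systems:

```python
# Project Navy DSRC peak performance assuming a clean doubling every two years.
# The 2,556 TF starting point is from the slide; later values are projections only.

def project_peak(start_tf, start_year, end_year, doubling_period_yrs=2.0):
    """Return [(year, teraflops)] assuming exponential growth."""
    points = []
    year = start_year
    while year <= end_year:
        tf = start_tf * 2 ** ((year - start_year) / doubling_period_yrs)
        points.append((year, round(tf)))
        year += 1
    return points

for year, tf in project_peak(2556, 2014, 2018):
    print(f"{year}: ~{tf:,} TF")
# ~2,556 TF in 2014 grows to roughly 5,100 TF by 2016 and 10,200 TF by 2018,
# broadly consistent with the ~5,760 and ~10,000 TF estimates on the previous slide.
```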

5 Top500® Systems by Architecture, June 2006–June 2014

6 Number of CPUs in the Top500® Systems by Architecture Type, June 2006–June 2014

7 Number of Systems in the Top500® Utilizing Co-Processors or Accelerators, June 2009–June 2014

8 Number of Systems in the Top500® by Co-Processors or Accelerators Type, June 2009–June 2014

9 Number of Cores in the Top500® by Co-Processors or Accelerators Type, June 2011–June 2014

10 Number of Cores in the June 2014 Top500® by CPU Manufacturer

11 TOP 500 SUPERCOMPUTER LIST (JUNE 2014)
By OEM supplier (number of Top500 systems):
- Cray Inc.: 51
- Dell: 8
- Hewlett Packard: 182
- IBM: 176
- SGI: 19
- Total (listed suppliers): 436
- Other suppliers: 64

12 High Performance Computing Modernization Program 2014 HPC Awards
February 2014
Air Force Research Lab (AFRL) DSRC, Dayton, OH
- Cray XC-30 system (Lightning)
  - teraFLOPS
  - 56,880 compute cores (2.7 GHz Intel Ivy Bridge)
  - 32 NVIDIA Tesla K40 GPGPUs
Navy DSRC, Stennis Space Center, MS
- Cray XC-30 (Shepard)
  - 813 teraFLOPS
  - 28,392 compute cores (2.7 GHz Intel Ivy Bridge)
  - 124 hybrid nodes, each consisting of 10 Ivy Bridge cores and a 60-core Intel Xeon Phi 5120D
- Cray XC-30 (Armstrong)
  - 786 teraFLOPS
  - 29,160 compute cores (2.7 GHz Intel Ivy Bridge)

13 High Performance Computing Modernization Program 2014 HPC Awards
September 2014
Army Research Lab (ARL) DSRC, Aberdeen, MD
- Cray XC-40 system
  - petaFLOPS (rough CPU-only estimate sketched below)
  - 101,312 compute cores (2.3 GHz Intel Xeon Haswell)
  - 32 NVIDIA Tesla K40 GPGPUs
  - 411 TB memory
  - 4.6 PB storage
Army Engineer Research and Development Center (ERDC) DSRC, Vicksburg, MS
- SGI ICE X system
  - petaFLOPS
  - 125,440 compute cores (2.3 GHz Intel Xeon Haswell)
  - 32 NVIDIA Tesla K40 GPGPUs
  - 440 TB memory
  - PB storage
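The core counts and clock rates above are enough for a rough theoretical-peak estimate. A minimal sketch (Python), assuming 16 double-precision FLOPs per core per cycle for Haswell (AVX2 with FMA) and ignoring the GPU contribution; these are ballpark CPU-only figures, not the awarded systems' official numbers:

```python
# Rough theoretical peak: cores x clock (Hz) x DP FLOPs per cycle.
# The 16 FLOPs/cycle figure assumes Haswell AVX2 with FMA; the K40 GPGPUs and
# all other subsystems are ignored, so this is a CPU-only ballpark estimate.

def peak_petaflops(cores, clock_ghz, flops_per_cycle=16):
    return cores * clock_ghz * 1e9 * flops_per_cycle / 1e15

arl_xc40 = peak_petaflops(101_312, 2.3)    # ARL DSRC Cray XC-40
erdc_ice_x = peak_petaflops(125_440, 2.3)  # ERDC DSRC SGI ICE X

print(f"ARL XC-40 : ~{arl_xc40:.1f} PF (CPU only)")
print(f"ERDC ICE X: ~{erdc_ice_x:.1f} PF (CPU only)")
# Roughly 3.7 PF and 4.6 PF respectively, before counting the K40 GPGPUs.
```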

14 High Performance Computing Modernization Program 2014/2015 HPC Awards
Air Force Research Lab (AFRL) DSRC, Dayton, OH
- FY15 funded; OEM and contract award TBD
- 100,000+ compute cores
- 3.5 to 5.0 petaFLOPS
Navy DSRC, Stennis Space Center, MS

15 Weather & Climate Operational Supercomputing System (WCOSS)
ECMWF (Top500 list, June 2014)
- 2 Cray XC30 systems, each with 81,160 compute cores (2.7 GHz Intel Ivy Bridge), 1,796 teraFLOPS
NOAA NWS/NCEP Weather & Climate Operational Supercomputing System (WCOSS)
- Phase I: 2 IBM iDataPlex systems, each with 10,048 compute cores (2.6 GHz Intel Sandy Bridge), 213 teraFLOPS
- Phase II (Jan 2015): adds 2 IBM NeXtScale systems, each with 24,192 compute cores (2.7 GHz Intel Ivy Bridge), 585 teraFLOPS

16 UK Meteorological Office
Current systems
- IBM Power 7 system: 18,432 compute cores (3.836 GHz), 565 teraFLOPS
- IBM Power 7 system: 15,360 compute cores (3.876 GHz), 471 teraFLOPS
October 2014 announcement: 128M contract
- 2 Cray XC-40 systems (Intel Xeon Haswell initially), >13 times faster than the current system, 480,000 compute cores in total
- Phase 1a: replace the Power 7s by Sep 2015
- Phase 1b: extend both systems to their power limit by Mar 2016
- Phase 1c: add one new system by Mar 2017

17 Expected Near-Term HPC Processor Options
- 2016: Intel and ARM (Cray has ARM in-house for testing)
- 2017: Intel, ARM, and IBM Power 9 (with closely coupled NVIDIA GPUs)

18 DoD Applications & Exascale Computing
General external impression
- In the 2024 timeframe, DoD will have no requirement for a balanced exascale supercomputer (untrue)
- DoD should not be a significant participant in U.S. exascale planning (untrue)
Reality
- DoD has compelling coupled multi-physics problems that will require more tightly integrated resources than will be technologically possible in the 2024 timeframe
- DoD has many other use cases that will benefit from the power efficiencies and novel technologies generated by the advent of exascale computing

19 HPCMP & 2024 DoD Killer Applications
- HPCMP categorizes its user base into 11 Computational Technology Areas (CTAs)
- Climate Weather Ocean (CWO) is one of the 11 CTAs; Dr. Burnett (CNMOC TD) is the DoD HPCMP CWO CTA leader
- Each CTA leader was tasked in FY14 to project killer apps in their CTA
- Dr. Burnett's CWO CTA analysis is led by Lockheed Martin; primary focus is on HYCOM but includes NAVGEM and ESPC
- Expect follow-on FY15 funding
- Develop appropriate Kiviat diagrams (example to follow)
- NRL Stennis is part of an ONR-sponsored NOPP project starting FY14 to look at attached processors (i.e., GPGPUs and accelerators) for HYCOM+CICE+WW3

20 Relevant Technology Issues
- Classical computing advances may stall in the next 10 years: 22 nm (feature size of the latest processors), 14 nm (anticipated feature size in 2015), 5–7 nm (forecast limit for classical methods)
- Recent 3D approaches are currently in use and dense 3D approaches are contemplated, but both have limitations
- Mean time between failures (MTBF) will decrease dramatically: petascale (hours to days) vs. exascale (minutes); see the sketch below
- Data management hurdles at exascale
- Power management hurdles at exascale
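The MTBF contrast above follows from simple arithmetic: under an independent-failure approximation, system MTBF shrinks roughly in proportion to component count. A minimal sketch (Python); the per-node MTBF and node counts are illustrative assumptions, not HPCMP figures:

```python
# System-level MTBF under the usual independent-failure approximation:
#   MTBF_system ~= MTBF_component / number_of_components
# The 5-year per-node MTBF and the node counts below are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def system_mtbf_hours(node_mtbf_years, node_count):
    return node_mtbf_years * HOURS_PER_YEAR / node_count

for label, nodes in [("petascale-class", 10_000), ("exascale-class", 1_000_000)]:
    mtbf = system_mtbf_hours(5, nodes)
    print(f"{label:16s}: {nodes:>9,} nodes -> MTBF ~ {mtbf:.2f} hours")
# ~4.4 hours for 10,000 nodes versus ~0.04 hours (a few minutes) for 1,000,000 nodes,
# matching the "hours to days" vs. "minutes" contrast on the slide.
```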

21 Relevant Software Issues
- The gap between intuitive coding (i.e., readily relatable to domain science) and high-performance coding will increase
- The underpinnings of architectures will change more rapidly than codes can be refactored
- Parallelism of the underlying mathematics will become asymptotic (at some point) despite the need to scale to millions, if not billions, of processing cores
- Current parallel code is based (in general) on synchronous communications; asynchronous methods may be necessary to overcome technology issues (see the sketch below)
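To make the synchronous-versus-asynchronous point concrete, here is a minimal sketch using mpi4py; it is an illustrative pattern, not taken from any HPCMP code:

```python
# Blocking vs. non-blocking point-to-point exchange with mpi4py.
# Run with, e.g.: mpirun -n 2 python async_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # this illustration assumes exactly two ranks

payload = {"rank": rank, "step": 0}

# Synchronous (blocking) style: each call returns only when it completes, so the
# ranks must be carefully ordered and no computation overlaps the communication.
if rank == 0:
    comm.send(payload, dest=peer, tag=0)
    received_sync = comm.recv(source=peer, tag=0)
else:
    received_sync = comm.recv(source=peer, tag=0)
    comm.send(payload, dest=peer, tag=0)

# Asynchronous (non-blocking) style: post the communication, overlap it with
# local computation, and complete it later with wait().
send_req = comm.isend(payload, dest=peer, tag=1)
recv_req = comm.irecv(source=peer, tag=1)
local_work = sum(i * i for i in range(100_000))  # "computation" overlapped with comms
send_req.wait()
received_async = recv_req.wait()

print(f"rank {rank}: sync={received_sync}, async={received_async}, work={local_work}")
```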

22 Path Forward (Deliverables) [cont.]
- A Kiviat diagram conveying the system architecture requirements for each impactful technology advance (illustrative sketch below)
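As an illustration of what such a deliverable might look like, a minimal Kiviat (radar) chart with matplotlib; the axis labels and scores are placeholders, not HPCMP requirements data:

```python
# Minimal Kiviat (radar) chart of notional system-architecture dimensions.
# Axis labels and the 0-10 scores below are placeholders for illustration only.
import numpy as np
import matplotlib.pyplot as plt

axes_labels = ["Compute", "Memory BW", "Interconnect", "Storage", "Resilience", "Power"]
scores = [8, 6, 7, 5, 4, 6]  # notional requirement levels on a 0-10 scale

angles = np.linspace(0, 2 * np.pi, len(axes_labels), endpoint=False).tolist()
values = scores + scores[:1]  # repeat the first point to close the polygon
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(axes_labels)
ax.set_ylim(0, 10)
ax.set_title("Notional system architecture requirements (Kiviat diagram)")
plt.savefig("kiviat_example.png", dpi=150)
```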

23 March Toward Exascale Computing
- Department of Energy target for exascale: 2024
- Japan target for exascale: 2020 (with $1B in government assistance)
- China target for exascale: now 2020 (originally 2018)
- HPCMP systems of ~100 petaflops expected in 7 to 8 years

