Slide-1: The Path to Extreme Supercomputing—LACSI Workshop
DARPA HPCS
David Koester, Ph.D., DARPA HPCS Productivity Team
MITRE / ISI / MIT Lincoln Laboratory
12 October 2004, Santa Fe, NM
This work is sponsored by the Department of Defense under Army Contract W15P7T-05-C-D001. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

Slide-2: Outline
– Brief DARPA HPCS overview
  – Impacts
  – Programmatics
  – HPCS Phase II teams
  – Program goals
  – Productivity factors — execution and development time
  – HPCS Productivity Team Benchmarking Working Group
– Panel theme/question
  – How much?
  – How fast?

Slide-3: High Productivity Computing Systems
Impact:
– Performance (time-to-solution): speed up critical national security applications by a factor of 10X to 40X
– Programmability (idea-to-first-solution): reduce the cost and time of developing application solutions
– Portability (transparency): insulate research and operational application software from the system
– Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, and programming errors
Applications: intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology
HPCS program focus: create a new generation of economically viable computing systems (2010) and a procurement methodology ( ) for the security/industrial community
Fill the critical technology and capability gap from today (late-80's HPC technology) to the future (quantum/bio computing)

Slide-4: High Productivity Computing Systems: Program Overview
Goal: create a new generation of economically viable computing systems (2010) and a procurement methodology ( ) for the security/industrial community
– Phase 1: concept study
– Phase 2 ( ): advanced design and prototypes; new evaluation framework; the Productivity Team is at its half-way point, marked by the Phase 2 Technology Assessment Review
– Phase 3 ( ): full-scale development and test of the evaluation framework, leading to petascale/s systems from the vendors and a validated procurement evaluation methodology

Slide-5: HPCS Phase II Teams
– Mission Partners
– Vendors: PI: Smith; PI: Elnozahy; PI: Mitchell
– Productivity Team (MIT Lincoln Laboratory lead; PI: Kepner), with PIs Lucas, Koester, Basili, Benson & Snavely, Dongarra; PIs Vetter, Lusk, Post, Bailey; PIs Gilbert, Edelman, Ahalt, Mitchell; plus CSAIL, Ohio State, and industry participants

Slide-6: HPCS Program Goals: Productivity Goals
HPCS overall productivity goals:
– Execution (sustained performance): 1 Petaflop/s, scalable to greater than 4 Petaflop/s (reference: production workflow)
– Development: 10X over today's systems (reference: lone researcher and enterprise workflows)
A 10X improvement in time to first solution!

Slide-7: HPCS Program Goals: Productivity Framework
Productivity = Utility / Cost

Slide-8: HPCS Program Goals: Hardware Challenges
A general-purpose architecture capable of the following subsystem performance indicators:
1) 2+ PF/s LINPACK
2) 6.5 PB/s STREAM data bandwidth
3) 3.2 PB/s bisection bandwidth
4) 64,000 GUPS
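For readers unfamiliar with the second indicator, the following is a minimal, single-node sketch of the STREAM triad kernel in Python/NumPy. The array size, timing harness, and NumPy formulation are illustrative assumptions only (the official benchmark is written in C/Fortran), and a toy like this says nothing about the PB/s aggregate targets above.

```python
import time
import numpy as np

def stream_triad(n=20_000_000, alpha=3.0, repeats=5):
    """Toy version of the STREAM 'triad' kernel, a[i] = b[i] + alpha * c[i];
    returns the best sustained memory bandwidth observed, in GB/s."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, alpha, out=a)   # a = alpha * c
        np.add(a, b, out=a)            # a = b + alpha * c
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * a.itemsize   # read b, read c, write a
    return bytes_moved / best / 1e9

if __name__ == "__main__":
    print(f"Triad bandwidth: {stream_triad():.1f} GB/s")
```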

Slide-9: Productivity Factors — Execution Time and Development Time
Utility and some costs are relative to:
– Workflow (WkFlow)
– Execution time (ExecTime)
– Development time (DevTime)
Productivity = Utility / Cost, where Utility = f(WkFlow, ExecTime, DevTime) and Cost includes both operating costs and procurement costs.
– Systems that provide increased utility and decreased operating costs may nevertheless have a higher initial procurement cost, so productivity metrics are needed to justify the higher initial cost.
– Reductions in both execution time and development time decrease operating costs: programmer costs fall and more work is performed over a given period.
– Reductions in both execution time and development time increase utility: utility is generally inversely related to time, so quicker is better.
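As a minimal sketch of how this framework could be exercised, the Python below assumes a simple inverse-time utility (value per result times results per year) and amortizes the procurement cost over a system lifetime. The functional form, parameter names, and numbers are illustrative assumptions, not part of the HPCS definition.

```python
def utility(exec_time_hours, dev_time_hours, value_per_result, hours_per_year=8760):
    """Illustrative utility model: value delivered per year, assuming utility is
    inversely related to time (quicker is better). Not the HPCS definition."""
    usable_hours = max(hours_per_year - dev_time_hours, 0.0)
    return (usable_hours / exec_time_hours) * value_per_result

def productivity(exec_time_hours, dev_time_hours, value_per_result,
                 procurement_cost, operating_cost_per_year, lifetime_years=5):
    """Productivity = Utility / Cost, with the procurement cost amortized over
    the system lifetime and added to the yearly operating cost."""
    yearly_cost = operating_cost_per_year + procurement_cost / lifetime_years
    return utility(exec_time_hours, dev_time_hours, value_per_result) / yearly_cost

# A system that cuts execution and development time can justify a higher
# purchase price: here the "HPCS-like" machine costs twice as much to buy
# yet comes out roughly 9x more productive under these assumed numbers.
baseline  = productivity(exec_time_hours=10.0, dev_time_hours=2000.0,
                         value_per_result=1.0, procurement_cost=50e6,
                         operating_cost_per_year=10e6)
hpcs_like = productivity(exec_time_hours=1.0, dev_time_hours=200.0,
                         value_per_result=1.0, procurement_cost=100e6,
                         operating_cost_per_year=8e6)
print(f"baseline: {baseline:.2e}   hpcs-like: {hpcs_like:.2e}")
```

The toy mirrors the slide's argument: shorter execution and development times raise utility and lower operating cost enough to offset a larger procurement cost.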

Slide-10: HPCS Benchmark Spectrum
The spectrum of benchmarks provides different views of the system, ranging from execution bounds and execution indicators through purpose benchmarks and development indicators to system bounds:
– 8 HPCchallenge benchmarks: Local (DGEMM, STREAM, RandomAccess, 1D FFT) and Global (Linpack, PTRANS, RandomAccess, 1D FFT). HPCchallenge pushes spatial and temporal boundaries and sets performance bounds.
– Many (~40) micro and kernel benchmarks, spanning kernels in discrete math, graph analysis, linear solvers, signal processing, simulation, and I/O.
– 6 scalable compact apps: pattern matching, graph analysis, simulation, signal processing, and others.
– Several (~10) small-scale applications.
– 9 simulation applications, spanning existing, emerging, and future applications; applications drive system issues and set legacy-code performance bounds. Current: UM2000, GAMESS, OVERFLOW, LBMHD, RFCTH, HYCOM. Near-future: NWChem, ALEGRA, CCSM.
Kernels and compact apps support deeper analysis of execution and development time.
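To give a flavor of the HPCchallenge kernels listed above, here is a toy, single-process sketch of the RandomAccess (GUPS) access pattern in Python/NumPy. The official benchmark defines a specific pseudo-random update stream and parallel update rules, none of which are reproduced here; the table size and update count are arbitrary assumptions.

```python
import time
import numpy as np

def toy_gups(table_bits=22, updates_per_entry=2, seed=1):
    """Toy sketch of the RandomAccess idea: XOR pseudo-random values into
    random locations of a large table and report giga-updates per second.
    Scattered read-modify-write traffic makes memory latency, not FLOPs,
    the bottleneck."""
    n = 1 << table_bits
    table = np.arange(n, dtype=np.uint64)
    num_updates = updates_per_entry * n
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, n, size=num_updates)                        # random locations
    vals = rng.integers(0, 1 << 63, size=num_updates, dtype=np.uint64)
    t0 = time.perf_counter()
    table[idx] ^= vals                                                # scattered updates
    elapsed = time.perf_counter() - t0
    return num_updates / elapsed / 1e9

if __name__ == "__main__":
    print(f"Toy GUPS: {toy_gups():.4f}")
```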

Slide-11: Panel Theme/Question
"How much should we change supercomputing to enable the applications that are important to us, and how fast?"
– How much? HPCS is intended to fill the critical technology and capability gap from today (late-80's HPC technology) to the future (quantum/bio computing).
– How fast?
  – Meaning when: HPCS SN-001 in 2010
  – Meaning performance: Petascale/s sustained
I am here to listen to you, the HPCS Mission Partners, to:
– Scope out emerging and future applications for delivery (what applications will be important to you?)
– Collect data for the HPCS vendors on future applications, kernels, and application characterizations and models

Slide-12: Statements
– Moore's Law cannot go on forever. Proof: 2^x → ∞ as x → ∞.
– So what? Moore's Law doesn't matter as long as we need to invest the increase in transistors into machine state — i.e., overhead — instead of real use.