High Productivity Computing Systems. Robert Graybill, DARPA/IPTO, March 2003.

RBG 10/9/ High Productivity Computing Systems

Goal: Provide a new generation of economically viable, high-productivity computing systems for the national security and industrial user community (2007–2010).

Impact:
– Performance (time-to-solution): speed up critical national security applications by a factor of 10X to 40X
– Programmability (time-for-idea-to-first-solution): reduce the cost and time of developing application solutions
– Portability (transparency): insulate research and operational application software from the system
– Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, and programming errors

Fill the critical technology and capability gap from today (late-1980s HPC technology) to the future (quantum/bio computing).

Applications: intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology.

HPCS Program Focus Areas
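The 10X to 40X performance goal is a time-to-solution ratio: the same application run end to end on the baseline and the new system. A minimal sketch of that arithmetic (the workload and timings are hypothetical):

```python
def time_to_solution_speedup(baseline_hours, new_hours):
    """Speedup factor: how much faster the new system solves the same problem."""
    return baseline_hours / new_hours

# A hypothetical application taking 400 hours on a late-1980s-era design
# and 10 hours on an HPCS platform would meet the 40X end of the goal.
assert time_to_solution_speedup(400, 10) == 40.0
```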

Vision: Focus on the lost dimension of HPC – "user and system efficiency and productivity"

– Fill the high-end computing technology and capability gap for critical national security missions
– Timeline: 1980s-technology parallel vector systems, through today's commodity HPCs, vector and tightly coupled parallel systems, to 2010 high-end computing solutions
– Moore's Law doubles raw performance every 18 months; the new goal is to double value every 18 months
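Doubling raw performance and doubling value follow the same compounding arithmetic; only the quantity being doubled changes. A small sketch of the 18-month doubling cadence (purely illustrative):

```python
def growth_factor(months, doubling_period_months=18):
    """Compound growth after `months` at one doubling per period."""
    return 2 ** (months / doubling_period_months)

# Three years at an 18-month doubling cadence compounds to 4x,
# whether the doubled quantity is raw performance or delivered value.
assert growth_factor(36) == 4.0
```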

HPCS Technical Considerations

– Architecture types: symmetric multiprocessors, distributed shared memory, parallel vector, scalable vector, massively parallel processors, commodity clusters, grids; system categories span commodity HPC and vector supercomputers
– Communication and programming models: microprocessor shared-memory multi-processing, distributed-memory multi-computing ("MPI"), custom vector
– Single-point design solutions are no longer acceptable
– HPCS focus: tailorable, balanced solutions across performance characterization and precision, programming models, hardware technology, software technology, and system architecture
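The slide's key programming-model split is shared-memory multi-processing versus distributed-memory multi-computing in the "MPI" style, where processes share nothing and exchange data only by explicit messages. A minimal sketch of the message-passing idea, using Python's multiprocessing module as a stand-in for a real MPI library (the worker and data are illustrative):

```python
from multiprocessing import Pipe, Process

def worker(conn):
    # Distributed-memory style: no shared state; operands arrive as messages.
    data = conn.recv()       # explicit receive (cf. MPI_Recv)
    conn.send(sum(data))     # explicit send of the result (cf. MPI_Send)
    conn.close()

def run():
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])   # ship the data to the other "rank"
    result = parent_end.recv()
    p.join()
    return result

if __name__ == "__main__":
    print(run())  # prints 10
```

The same computation on a shared-memory machine would simply read the list in place; the explicit send/receive pair is what distinguishes the distributed-memory model.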

HPCS Program Phases I–III

– Phase I (industry concept study): industry concept study reviews; requirements and metrics
– Phase II (R&D): system design review, technology assessments, Phase II readiness reviews; products include metrics and benchmarks, academia research platforms, and early software tools
– Phase III (full-scale development): Phase III readiness review, PDR, DDR; research prototypes, pilot systems, and early pilot platforms
– Across all phases, by fiscal year: application analysis, performance assessment, industry procurements, and critical program milestones, following the industry evolutionary development cycle toward HPCS capability or products

HPCS Phase I Industry Teams

Industry:
– Cray, Inc. (Burton Smith)
– Hewlett-Packard Company (Kathy Wheeler)
– International Business Machines Corporation (Mootaz Elnozahy)
– Silicon Graphics, Inc. (Steve Miller)
– Sun Microsystems, Inc. (Jeff Rulifson)

Application Analysis/Performance Assessment Team: MIT Lincoln Laboratory

Application Analysis / Performance Assessment Activity Flow

– Motivation: DDR&E and IHEC mission analysis; the DARPA HPCS program
– Inputs: mission partners (DoD, DOE, NNSA, NSA, NRO) and industry participants (Cray, HP, IBM, SGI, Sun)
– Application analysis: HPCS applications (1. cryptanalysis, 2. signal and image processing, 3. operational weather, 4. nuclear stockpile stewardship, 5. etc.), mission work flows, compact applications, and common critical kernels
– Benchmarks and metrics: productivity as the ratio of utility to cost; metrics include development time (cost) and execution time (cost), plus implicit factors; these define system requirements and characteristics and drive HPCS technology
– Impacts: improved mission capability and a mission-specific roadmap for the mission partners
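The central definition on this slide, productivity as the ratio of utility to cost with development time and execution time as the measured cost terms, can be sketched as follows (the utility and cost numbers are hypothetical; HPCS left utility mission-defined):

```python
def productivity(utility, development_cost, execution_cost):
    """Productivity = utility / (development cost + execution cost).
    Units are illustrative; utility is whatever the mission values."""
    return utility / (development_cost + execution_cost)

# Halving development cost raises productivity even when execution
# cost is unchanged -- the program's argument for valuing
# programmability alongside raw performance.
before = productivity(100.0, 60.0, 40.0)   # 100 / 100
after = productivity(100.0, 30.0, 40.0)    # 100 / 70
assert after > before
```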

Application Focus Selection (candidates from the DDR&E and IHEC studies)

– Operational weather and ocean forecasting
– Planning activities for dispersion of airborne/waterborne contaminants
– Cryptanalysis
– Intelligence, surveillance, reconnaissance
– Improved armor design
– Engineering design of large aircraft, ships, and structures
– National missile defense
– Test and evaluation
– Weapons (warheads and penetrators)
– Survivability/stealth design
– Comprehensive aerospace vehicle design
– Signals intelligence (crypt and graph)
– Stealthy ship design
– Nuclear weapons stockpile stewardship
– Signal and image processing
– Army Future Combat Systems
– Electromagnetic weapons development
– Geospatial intelligence
– Threat weapon systems characterization
– Bioscience

Computational Biology: from Sequence to Systems

Biomedical computing requirements grow along the pipeline, from trivially parallel TeraOps at the sequence end toward peta-scale computing at the systems end:

Sequence genome → assemble genome → find the genes → annotate the genes → map genes to proteins → protein–protein interactions → pathways (normal and aberrant) → protein functions in pathways → protein structure → identify drug targets → cellular response → tissue, organ, and whole-body response

(Slide provided by IDC)

HPCS Mission Work Flows

– Production (response time: hours to minutes): Observe → Orient → Decide → Act
– Enterprise (overall cycle: months to days; development cycle: years to months): design, simulation, visualize; development spans initial product development and porting legacy software, then code, test, port, scale, optimize
– Researcher (initial development: days to hours; execution: hours to minutes): experiment, theory, design, code, test, prototyping, evaluation
– Operation and maintenance follow initial development in each cycle

HPCS productivity factors (performance, programmability, portability, and robustness) are very closely coupled with each work flow.

Workflow Priorities & Goals

Implicit productivity factors by workflow:

  Workflow    | Perf. | Prog. | Port. | Robust.
  Researcher  |       | High  |       |
  Enterprise  | High  | High  | High  | High
  Production  | High  |       |       | High

– Workflows define the scope of customer priorities; mission needs drive system requirements
– Activity and purpose benchmarks will be used to measure productivity
– The HPCS goal is to add value to each workflow: increase productivity while increasing problem size, from workstation through cluster to HPCS scale, across the researcher, production, and enterprise workflows

HPCS Productivity Framework

Productivity (the ratio of utility to cost) is measured by running activity and purpose benchmarks against an actual system or model, producing development-time (cost) and execution-time (cost) metrics for each work flow.

Example system parameters:
– Memory: BW bytes/flop, memory latency, memory size, …
– Processor and interconnect: flop/cycle, bisection BW, total connections, …
– Facility: size (cu ft), power/rack, facility operation, …
– Software: code size, restart time (reliability), code optimization time, …

Implicit HPCS productivity factors: performance, programmability, portability, and robustness.
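Among the example system parameters, bytes/flop is a balance ratio between sustained memory bandwidth and peak arithmetic rate; a machine with too few bytes per flop starves its processors on memory-bound codes. A hedged sketch of that ratio (the node figures are hypothetical):

```python
def bytes_per_flop(mem_bandwidth_gbs, peak_gflops):
    """Memory-bandwidth balance: bytes delivered per peak floating-point op.
    GB/s divided by Gflop/s leaves plain bytes/flop."""
    return mem_bandwidth_gbs / peak_gflops

# A hypothetical node with 80 GB/s of memory bandwidth and a 20 Gflop/s
# peak has a balance of 4 bytes/flop.
assert bytes_per_flop(80.0, 20.0) == 4.0
```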

HPC Community Reactions

DoD and DOE user communities:
– Active participation in reviews
– Providing challenge problems
– Linking with internal efforts
– Providing funding synergism

Industry:
– Finally an opportunity to develop a non-evolutionary vision
– Active program support (technical, personnel, vision)
– Direct impact on future product roadmaps

University:
– Active support for Phase I (2X growth from proposals)

Extended community:
– HPCS strategy embedded in the Congressional IHEC report
– Productivity: a new HPC sub-discipline