Computer Science and Computational Science
Sampath Kannan, Division Director, Computing & Communication Foundations Division, National Science Foundation

Outline
- Need for new technology
- Challenges from the new technology
- Bridging the two disciplines
- NSF/CISE programs

The Challenge: "a right-hand turn in Moore's Law growth"
[Figure: single-thread performance over time, showing AMD Phenom and Intel Woodcrest]
("right hand turn" ascribed to P. Otellini, Intel)

Big Scientific Problems
- Understanding oceans, atmosphere, climate: more sensors for better accuracy -> more data; coupled systems -> more complex computation
- Biology and medicine: biology is generating lots of data, 1) per individual rather than per species and 2) from metagenomics
- Smart health: personalized, ubiquitous health care; telemedicine, telepresence
- Astrophysics, cosmology
- … and many others

Data Deluge (WSJ, Aug 28, 2009)
- Never have so many people generated so much digital data, or been able to lose so much of it so quickly, say experts at the San Diego Supercomputer Center
- Computer users worldwide generate enough digital data every 15 minutes to fill the U.S. Library of Congress
- More technical data have been collected in the past year alone than in all previous years since science began, says Johns Hopkins astrophysicist Alexander Szalay
- The problem is forcing historians to become scientists, and scientists to become archivists and curators
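To make the "one Library of Congress every 15 minutes" figure concrete, here is a rough back-of-the-envelope conversion. The ~10 TB size used for the Library of Congress is an assumption on my part (a commonly quoted ballpark for its digitized print collection), not a number from the slide.

```python
# Back-of-the-envelope: what "a Library of Congress every 15 minutes" implies per year.
# ASSUMPTION: one "Library of Congress" ~= 10 TB (commonly quoted ballpark,
# not a figure from the slide).
LOC_TB = 10.0
minutes_per_year = 365 * 24 * 60          # 525,600 minutes

loc_per_year = minutes_per_year / 15      # LoC-sized chunks generated per year
tb_per_year = loc_per_year * LOC_TB       # implied global data volume per year

print(f"~{loc_per_year:,.0f} Library-of-Congress volumes per year")
print(f"~{tb_per_year / 1e6:.2f} exabytes per year (1 EB = 1e6 TB)")
```

With those assumptions, the 15-minute claim works out to roughly 35,000 Library-of-Congress volumes, or a few hundred petabytes, per year.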

Challenges
- Hardware
- Middleware, I/O, storage, …
- Software
- Abstractions and formal reasoning
- Algorithms
- Power/energy
- Resilience to faults

Variety of Hardware Platforms
- Multicore, many-core: How many cores? How heterogeneous? What interconnects? What memory hierarchy?
- Non-silicon: bio, nano, quantum
Even if applications can be designed for just one of these, computer science demands one (or a few) programming models.
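As a hypothetical illustration of the "one programming model across many platforms" aspiration, the sketch below defines a single data-parallel map whose execution backend can be swapped without touching application code; the function and backend names are invented for this example.

```python
# Minimal sketch: one data-parallel abstraction, multiple execution backends.
# The application only ever calls par_map(); swapping the backend stands in for
# retargeting the same program to different hardware platforms.
from multiprocessing import Pool

def par_map(func, data, backend="serial", workers=4):
    """Apply func to every element of data using the chosen backend."""
    if backend == "serial":
        return [func(x) for x in data]
    if backend == "multiprocess":
        with Pool(workers) as pool:
            return pool.map(func, data)
    raise ValueError(f"unknown backend: {backend}")

def kernel(x):
    return x * x  # stand-in for a real scientific kernel

if __name__ == "__main__":
    data = list(range(1000))
    serial = par_map(kernel, data)
    parallel = par_map(kernel, data, backend="multiprocess")
    assert serial == parallel  # same program, different "platform"
```

The point is not this specific API, but that application code written against one abstraction can follow the hardware as it changes.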

Middleware, I/O, Storage
- Better distributed operating systems
- Better compilers (automatic parallelism detection, optimization, etc.)
- Better I/O and intelligent storage systems
… should lead to … EASIER PROGRAMMING MODELS

Software
- Need good programming models
- Need multiple levels of abstraction, for expert programmers and for non-experts
- Tools for reasoning about correctness and other properties
- Tools and middleware that allow portability
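As a sketch of "multiple levels of abstraction", the hypothetical code below layers a no-knobs interface for non-experts over a tunable interface for expert programmers; both the API names and the solver choice (a simple Jacobi iteration) are my own for illustration.

```python
# Hypothetical two-level API: experts see every knob, non-experts call one function.
import numpy as np

def solve_expert(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Expert-level entry point: exposes initial guess, tolerance, iteration cap.
    Plain Jacobi iteration; assumes A is diagonally dominant."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)              # diagonal entries
    R = A - np.diagflat(D)      # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def solve(A, b):
    """Non-expert entry point: sensible defaults, no knobs."""
    return solve_expert(A, b)

print(solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))  # ~[0.0909, 0.6364]
```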

Energy/Power Efficiency Is Critical
- Power is the bottleneck for HPC systems
- Current systems consume tens of megawatts of power; the cost to operate them may be prohibitive
- The power needed to cool a system approaches the power consumed by the system
- System failure rate doubles for every 10 °C rise in temperature (see the sketch below)
- Reducing the energy footprint of IT is an important goal
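The "failure rate doubles for every 10 °C rise" rule of thumb can be written as rate(T) = rate(T0) · 2^((T − T0)/10). A minimal sketch, assuming a baseline of 1.0 at 20 °C purely for illustration:

```python
# Rule of thumb from the slide: failure rate doubles for every 10 degC rise.
#   rate(T) = rate(T0) * 2 ** ((T - T0) / 10)
def relative_failure_rate(T, T0=20.0, base_rate=1.0):
    """Failure rate at temperature T (degC), normalized to base_rate at T0."""
    return base_rate * 2 ** ((T - T0) / 10.0)

for T in (20, 30, 40, 50):
    print(f"{T} degC -> {relative_failure_rate(T):.0f}x the baseline failure rate")
```

So a system running 30 °C hotter than its baseline sees roughly eight times the baseline failure rate, which is one reason cooling power is worth spending.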

Fault Resilience
- It is not acceptable to deal with faults by hardware replication
- Expose faults to as high a layer as possible, and find robust computing solutions through a combination of software and hardware approaches
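As a minimal sketch of exposing faults to a higher layer, the code below (invented for illustration, not from the talk) has the runtime surface a task failure to the application, which applies its own retry policy rather than relying on hardware replication.

```python
# Sketch: application-level fault handling. The (simulated) runtime raises a
# TaskFailure; the application decides how to recover (here, bounded retries).
import random

class TaskFailure(Exception):
    """Raised by the simulated runtime when a task fails."""

def unreliable_task(x, fail_prob=0.3):
    # Simulated compute task that occasionally fails.
    if random.random() < fail_prob:
        raise TaskFailure(f"task({x}) failed")
    return x * x

def run_with_retries(task, x, max_retries=5):
    """Application-level recovery policy: retry a failed task a bounded number of times."""
    for attempt in range(1, max_retries + 1):
        try:
            return task(x)
        except TaskFailure:
            print(f"attempt {attempt} failed for input {x}; retrying")
    raise RuntimeError(f"task permanently failed for input {x}")

if __name__ == "__main__":
    random.seed(0)
    print([run_with_retries(unreliable_task, x) for x in range(5)])
```

Real systems would combine this kind of software policy with hardware support (checkpointing, error-correcting memory), which is the combination of approaches the slide argues for.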

Computational vs. Computer Science
Computational science goal and approach:
- Solve important scientific problems of ever-increasing scale
- OK if codes are designed for a specific platform and application
- A few standard simulators and equation solvers, slightly customized for each application and platform

What Computer Science Would Like
- Problems specify what should be computed, not how it should be computed, to allow algorithmic and implementation ingenuity
- Use good, existing software engineering ideas, and seek new ones appropriate for the application
- Solve the challenges in the earlier slides, so that a more generic infrastructure is created for the hardware and software layers in HPC

What Computer Scientists Should Do
- Be a more dependable partner: provide software and tools that are maintained and evolved as needed
- Understand the domain science issues
- Appreciate the importance of specific applications
- Appreciate the importance of computing and data as the 3rd and 4th paradigms of science, and the responsibility this gives them

CISE Programs - Core
Software + Hardware Foundations (≈ $40–50M per year) supports:
- High-performance computing
- Compilers
- Programming languages
- Formal methods
- Computer architecture
- Nanocomputing
- Design automation

Other CISE Programs
- Computing Research Infrastructure (CRI) … recognizes that software is infrastructure
- Expeditions in Computing: our program for bold, ambitious, collaborative research; up to three 5-year projects per year, each funded at $10M

Programs with OCI – 1) HECURA
- Competitions in FY '06, '08, '09: NSF (CISE + OCI), DARPA, DOE
- I/O, file systems, compilers, programming models
- $10–15M each year
- Not sure when the next competition will be

2) PetaApps
- Develop the future simulation, optimization, and analysis tools that use emerging petascale computing
- Will advance the frontiers of research in science and engineering, with a high likelihood of enabling transformative research
- Areas examined include: climate change, earthquake dynamics, storm-surge models, supernova simulations

3) Software Institutes for Sustained Innovation
- Creating, maintaining, and evolving software for scientific computing
- OCI is the lead; CISE and other directorates participate
- The current competition has small awards only
- Workshops are sought this year to lay the groundwork for large "Institute" awards in future years

Cyber-Enabled Discovery and Innovation (CDI)
- 3rd year of competition; ≈$100M each year
- Agency-wide
- Supports projects that advance two or more disciplines and the use of computational thinking
- Many supported projects are in the area of scientific computing

Conclusion
The CISE perspective is guided by the belief that:
- Today's high-performance computer is tomorrow's general-purpose computer
- We must keep developing general ideas that allow such machines to be used effectively and broadly
- We cannot predict where the need for these machines will be greatest
- But today's science applications are clearly pressing and important