Q: What Does the Future Hold for “Parallel” Languages?


Q: What Does the Future Hold for "Parallel" Languages? A: Examine History
Frederica Darema, Senior Science and Technology Advisor; Director, Next Generation Software Program, NSF

Application and Platform Directions
Applications are becoming more complex: multi-modal aspects of the application system, multiple developers, multiple languages, distributed application modules, and distributed data.
Platforms of the past (Vector Processors, SIMD MPPs, Distributed Memory MPs, and Shared Memory MPs) are evolving into:
- Distributed platforms: GRIDs and GiBs (Grids-in-a-Box)
- Heterogeneous computers and networks: in architecture (compute & network) and node power (supernodes, PCs)
- Multiple levels of memory hierarchy, with latencies that vary (internode, intranode) and bandwidths that differ across links and with traffic

The Emergence of Grid Computing in the '90s
Coordinated problem solving on dynamic and heterogeneous resource assemblies:
- Data acquisition
- Imaging instruments
- Computational resources
- Large-scale databases
- Advanced visualization and analysis
Example: "Telescience Grid", courtesy of Ellisman & Berman, UCSD & NPACI

Adaptive Software Project for Fluids and Crack Propagation
[Workflow diagram of the pipe crack-propagation pipeline: a MiniCAD model feeds surface and generalized meshers (JMesh; T4/T10 solid and fluid meshes), fluid/thermo and mechanical analyses, and a fracture-mechanics loop of crack initiation, insertion, and extension driven by initial flaw and growth parameters, with visualization of each updated model.]

Programming Parallel Computers
- 1983: the SPMD model (private- and shared-memory machines); some used it as a data-parallel model, but SPMD is more general
- Mid-to-late '80s: several "message-passing" machines; the Parallel Computing Forum; IBM Parallel Fortran
- Early '90s: scalable MPPs; PVM, MPI, and a multitude of implementations thereof; HPF
- Later: also DSM and SMP clusters; threads, Titanium, Split-C, UPC, GA, Co-Array Fortran, EARTH, OpenMP, Charm++, STAPL, ... (PVM- and SPMD-based models)
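The SPMD idea the slide dates to 1983 can be sketched in a few lines: every worker runs the same program and selects its own slice of the data by rank. This is a minimal, hypothetical illustration in Python (real SPMD codes would use MPI); the function names and the cyclic partition are the author's own choices for the sketch, not from the slides.

```python
# SPMD sketch: same program on every "rank", each working on its own
# partition of the data, followed by a reduction of the partial results.
from multiprocessing import Pool

def worker(args):
    rank, nprocs, data = args
    chunk = data[rank::nprocs]          # simple cyclic partition by rank
    return sum(x * x for x in chunk)    # local computation on the chunk

def spmd_sum_of_squares(data, nprocs=4):
    with Pool(nprocs) as pool:
        partials = pool.map(worker, [(r, nprocs, data) for r in range(nprocs)])
    return sum(partials)                # "reduction" across all ranks

if __name__ == "__main__":
    print(spmd_sum_of_squares(list(range(10))))  # 285
```

Each rank executes identical code; only its `rank` value differs, which is what distinguishes SPMD from a master/slave decomposition.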

Parallel and Distributed Computing (Dynamic Analysis Situation)
Platform, programming model, and resulting constraint:
- Network of Workstations (NOW): message passing; static partition
- Symmetric Multiprocessor (SMP): shared queue; dynamic allocation of work
- Cluster of SMPs: message passing across SMPs; inefficient load balancing; application "re-write" required within an SMP
- Distributed computing resources (Globus, Legion, Condor, Harness, ...): the application cannot be repartitioned dynamically or mapped optimally when resources change
[Diagram: applications launched onto an adaptable systems infrastructure over a distributed platform of MPPs, NOWs, and SPs, with SAR, tac-com, base, data, cntl, fire-alg, and accelerator components.]
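The "shared queue / dynamic allocation of work" pattern the slide associates with SMPs can be sketched directly: idle workers pull the next task from a common queue, so uneven task costs balance themselves without any static partition. This is an illustrative sketch (names are the author's, not from the slides), using Python threads in place of SMP processors.

```python
# Dynamic work allocation via a shared queue: threads that finish
# early simply pull more work, so load balances automatically.
import threading
import queue

def dynamic_map(func, items, nthreads=4):
    work, results = queue.Queue(), []
    lock = threading.Lock()
    for it in items:
        work.put(it)                    # all tasks go into the shared queue

    def worker():
        while True:
            try:
                item = work.get_nowait()  # grab the next unit of work
            except queue.Empty:
                return                    # queue drained: this worker exits
            r = func(item)
            with lock:                    # protect the shared results list
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Contrast this with the NOW row above: a static partition fixes each worker's share up front, which is exactly what breaks down when resources change.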

The NGS Program develops technology for integrated feedback & control: the Runtime Compiling System (RCS) and dynamic application composition.
[Diagram: an application program, written against a distributed programming model, passes through a compiler front-end to an application intermediate representation; the compiler back-end, driven by performance measurements & models of the dynamic analysis situation, dynamically links & executes application components & frameworks on distributed computing resources (an adaptable systems infrastructure over MPPs, NOWs, and SPs).]

Models are needed that will enable the RCS (runtime compiling system) to map applications without:
- requiring detailed resource-management specifications by the user
- requiring specification of data location and execution
Advanced concepts:
- dynamic adaptive resource management
- decoupled execution and data location/placement
- memory consistency models
- multithreaded hierarchical concurrency
- combinations of language and library models / hybrids
- "active programming" and "active data distribution"

Application Programming Models
Applications will increasingly be multi-component:
- some parts with fine-grain, some parts with coarse-grain concurrency
- different algorithms suit different sets of processor resources; leaving this burden exclusively to the user is not desirable
- for specific algorithms, high-level systems (e.g. NetSolve) provide portability and optimized mapping
Models we will continue to pursue:
- shared memory
- message passing
- remote procedure call (RPC)
- hybrid and higher-level models
embodied in library-based implementations, language-extension-based implementations, and new/advanced programming models.
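The hybrid model listed above combines the two styles: message passing between nodes, shared memory within a node (in practice, MPI across nodes with OpenMP or threads inside each). A hedged sketch of the structure, with Python processes standing in for nodes and threads for the intra-node shared-memory level; all names are illustrative.

```python
# Hybrid sketch: message passing *between* "nodes" (processes + pipes),
# shared-memory parallelism *within* each node (a thread pool).
from multiprocessing import Process, Pipe
from concurrent.futures import ThreadPoolExecutor

def node(conn, data):
    # Intra-node level: threads share the node's memory.
    with ThreadPoolExecutor(max_workers=2) as ex:
        partial = sum(ex.map(lambda x: x * x, data))
    conn.send(partial)          # inter-node level: message back to the root
    conn.close()

def hybrid_sum_of_squares(data):
    mid = len(data) // 2
    parents, procs = [], []
    for part in (data[:mid], data[mid:]):   # one data block per node
        parent, child = Pipe()
        p = Process(target=node, args=(child, part))
        p.start()
        parents.append(parent)
        procs.append(p)
    total = sum(c.recv() for c in parents)  # gather partial results
    for p in procs:
        p.join()
    return total
```

The point of the hybrid is exactly the slide's complaint about clusters of SMPs: a pure message-passing code wastes the shared memory inside each node, while a pure shared-memory code cannot span nodes.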

Programming Environments
- Procedural -> model-based programming -> composition
- Custom structures -> customizable structures (patterns, templates)
- Libraries -> frameworks -> compositional systems (knowledge-based systems)
- Application composition frameworks and interoperability, extended to include measurements
- Data models and data management: extend the notion of data-exchange standards (for applications and measurements)

The Role of the Applications
- Applications must be included to validate the effectiveness of the new models and environments.
- Experience with using the new models on real applications and platforms will guide the models' evolution.
- The effort should be coupled with advances in compiler technology and performance-analysis tools.

Summary
We need programming models that:
- shorten distributed-application development time
- make parallel/concurrent/distributed algorithms easier to express
- improve portability and application efficiency across platforms
SPMD still remains useful, but there are many challenges and opportunities. Further advances are needed in programming models, compilers, and libraries, and in integrating these software components into an application development and runtime support system... and don't forget performance and efficiency!
The NGS Program fosters research in these directions, and specifically calls for proposals on programming models: www.cise.nsf.gov (Program Announcements)