1
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Parallel Programming in C with MPI and OpenMP
Introduction to Parallel Computing
Slides courtesy of Michael J. Quinn
Suhail Rehman, MS by Research, IIIT-H
rehman@research.iiit.ac.in
2
Course Structure
- 4 classes
- Quick introduction to parallel computing
- Covering theory today
- Hands-on programming
- Hopefully will give you a 'flavor' of the field
- Mainly concentrating on MPI
3
References
- Parallel Programming in C with MPI and OpenMP by M. Quinn
  - Copies in the library
- Toolset is MPICH2 and/or Open MPI
- Intel Multicore Lab (223A)
- Contact me at rehman@research.iiit.ac.in
4
Chapter 1: Motivation and History
5
Outline
- Motivation
- Modern scientific method
- Evolution of supercomputing
- Modern parallel computers
- Seeking concurrency
- Data clustering case study
- Programming parallel computers
6
Why Faster Computers?
- Solve compute-intensive problems faster
  - Make infeasible problems feasible
  - Reduce design time
- Solve larger problems in the same amount of time
  - Improve answer's precision
  - Reduce design time
- Gain competitive advantage
7
Definitions
- Parallel computing: using a parallel computer to solve a single problem faster
- Parallel computer: a multiple-processor system supporting parallel programming
- Parallel programming: programming in a language that supports concurrency explicitly
8
Why MPI?
- MPI = "Message Passing Interface"
- Standard specification for message-passing libraries
- Libraries available on virtually all parallel computers
- Free libraries also available for networks of workstations or commodity clusters
9
Why OpenMP?
- OpenMP is an application programming interface (API) for shared-memory systems
- Supports higher-performance parallel programming of symmetric multiprocessors
10
Classical Science
- Nature
- Observation
- Theory
- Physical Experimentation
11
Modern Scientific Method
- Nature
- Observation
- Theory
- Physical Experimentation
- Numerical Simulation
12
Evolution of Supercomputing
- World War II
  - Hand-computed artillery tables
  - Need to speed computations
  - ENIAC
- Cold War
  - Nuclear weapon design
  - Intelligence gathering
  - Code-breaking
13
Supercomputer
- General-purpose computer
- Solves individual problems at high speeds, compared with contemporary systems
- Typically costs $10 million or more
- Traditionally found in government labs
14
Commercial Supercomputing
- Started in capital-intensive industries
  - Petroleum exploration
  - Automobile manufacturing
- Other companies followed suit
  - Pharmaceutical design
  - Consumer products
15
50 Years of Speed Increases
- ENIAC: 350 flops
- Today: > 1 trillion flops
16
CPUs 1 Million Times Faster
- Faster clock speeds
- Greater system concurrency
  - Multiple functional units
  - Concurrent instruction execution
  - Speculative instruction execution
17
Systems 1 Billion Times Faster
- Processors are 1 million times faster
- Combine thousands of processors
- Parallel computer
  - Multiple processors
  - Supports parallel programming
- Parallel computing = using a parallel computer to execute a program faster
18
Modern Parallel Computers
- Caltech's Cosmic Cube (Seitz and Fox)
- Commercial copy-cats
  - nCUBE Corporation
  - Intel's Supercomputer Systems Division
  - Lots more
- Thinking Machines Corporation
19
Commercial Parallel Systems
- Relatively costly per processor
- Primitive programming environments
- Focus on commercial sales
- Scientists looked for an alternative
20
Beowulf Concept
- NASA (Sterling and Becker)
- Commodity processors
- Commodity interconnect
- Linux operating system
- Message Passing Interface (MPI) library
- High performance/$ for certain applications
21
Advanced Strategic Computing Initiative
- U.S. nuclear policy changes
  - Moratorium on testing
  - Production of new weapons halted
- Numerical simulations needed to maintain existing stockpile
- Five supercomputers costing up to $100 million each
22
ASCI White (10 teraops/sec)
23
Seeking Concurrency
- Data dependence graphs
- Data parallelism
- Functional parallelism
- Pipelining
24
Data Dependence Graph
- Directed graph
- Vertices = tasks
- Edges = dependences
25
Data Parallelism
- Independent tasks apply the same operation to different elements of a data set
- Okay to perform operations concurrently

for i ← 0 to 99 do
  a[i] ← b[i] + c[i]
endfor
26
Functional Parallelism
- Independent tasks apply different operations to different data elements
- The first and second statements may execute concurrently
- The third and fourth statements may execute concurrently

a ← 2
b ← 3
m ← (a + b) / 2
s ← (a² + b²) / 2
v ← s − m²
27
Pipelining
- Divide a process into stages
- Produce several items simultaneously
28
Partial Sums Pipeline
29
Programming Parallel Computers
- Extend compilers: translate sequential programs into parallel programs
- Extend languages: add parallel operations
- Add a parallel language layer on top of a sequential language
- Define a totally new parallel language and compiler system
30
Extend Language
- Add functions to a sequential language
  - Create and terminate processes
  - Synchronize processes
  - Allow processes to communicate
31
Extend Language (cont.)
Advantages:
- Easiest, quickest, and least expensive
- Allows existing compiler technology to be leveraged
- New libraries can be ready soon after new parallel computers are available
32
Extend Language (cont.)
Disadvantages:
- Lack of compiler support to catch errors
- Easy to write programs that are difficult to debug
33
Current Status
- Low-level approach is most popular
  - Augment existing language with low-level parallel constructs
  - MPI and OpenMP are examples
- Advantages of low-level approach
  - Efficiency
  - Portability
- Disadvantage: more difficult to program and debug
34
Summary (1/2)
- High performance computing
  - U.S. government
  - Capital-intensive industries
  - Many companies and research labs
- Parallel computers
  - Commercial systems
  - Commodity-based systems
35
Summary (2/2)
- Power of CPUs keeps growing exponentially
- Parallel programming environments changing very slowly
- Two standards have emerged
  - MPI library, for processes that do not share memory
  - OpenMP directives, for processes that do share memory