Introduction to Parallel Processing, Dr. Guy Tel-Zur, Semester A, 2006

List of topics covered so far in the course. The list is intended to help students prepare for the quiz by recalling concepts. However, it should be made clear that the list is partial, non-binding, and is not a substitute for studying each of the lecture chapters.

Parallel Processing – Definition
Parallel Processing limitations
Complexity
Monte Carlo Simulations
SMP
Flynn’s Taxonomy
Speedup
Work, p * t_p
Efficiency, t_s / (p * t_p)
Amdahl’s law
Computation/Communication Ratio
Load Imbalance
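For reference, the performance metrics listed above can be summarized with the standard textbook formulas, where t_s is the sequential run time, t_p the parallel run time on p processors, and f the inherently serial fraction in Amdahl's law:

S(p) = \frac{t_s}{t_p}, \qquad E = \frac{S(p)}{p} = \frac{t_s}{p\,t_p}, \qquad \text{Work (cost)} = p\,t_p, \qquad S(p) \le \frac{1}{f + (1 - f)/p}.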

Domain Decomposition
Message Passing
PVM
SPMD
MPI
MPI_Send()
MPI_Recv()
MPI_Init()
MPI_Finalize()
Deadlock
Collective communication
Shared memory
Network topology
Bisection Width
Diameter
Connectivity
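As a reminder of how the four MPI calls above fit together, here is a minimal SPMD sketch (an illustration, not taken from the lecture slides) in which rank 0 sends one integer to rank 1; it assumes the program is launched with at least two processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* which process am I?   */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1, tag 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();                           /* shut down MPI */
    return 0;
}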

Cost
Torus (3D)
Binary Tree
Hypercube
Profiling
Benchmarking
Condor
HTC
Opportunistic environment
ClassAds and Matchmaking
Master-Worker
Communicator
Point-to-point
MPI_Comm_size()
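For the topologies above, the standard figures for a d-dimensional hypercube with p = 2^d nodes are worth remembering (textbook values, quoted here only for reference):

\text{diameter} = d = \log_2 p, \qquad \text{bisection width} = \frac{p}{2}, \qquad \text{node degree (connectivity)} = \log_2 p.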

MPI_Comm_rank()
Latency
Bandwidth
Synchronous send/recv
Blocking/Non-blocking
Broadcast
Scatter
Gather
Unsafe message passing
t_comm = t_startup + n * t_data
MPI_ANY_SOURCE
MPI_ANY_TAG
Send/Recv modes, e.g. MPI_Isend()/MPI_Irecv()
MPI_Wtime()
All-to-All
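The deadlock risk of unsafe, symmetric blocking message passing and the non-blocking modes listed above can be illustrated with a small exchange between two ranks. The sketch below (an illustration, not course code; it assumes ranks 0 and 1 do the exchange) posts MPI_Irecv/MPI_Isend first and waits afterwards, timing the exchange with MPI_Wtime():

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {              /* only ranks 0 and 1 take part */
        int other = 1 - rank, sendbuf = rank, recvbuf = -1;
        MPI_Request reqs[2];
        double t0 = MPI_Wtime();

        /* Post both non-blocking operations, then wait: neither rank blocks,
           so this symmetric exchange cannot deadlock the way two blocking
           synchronous sends can. */
        MPI_Irecv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received %d in %g s\n", rank, recvbuf, MPI_Wtime() - t0);
    }
    MPI_Finalize();
    return 0;
}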

Partitioning
Divide and Conquer
Bucket sort
Numerical integration using rectangles
Gravitational N-Body Problem
Barnes-Hut Algorithm
Embarrassingly Parallel Computations
Low-level image processing
Mandelbrot Set
Pipelined Computations
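Numerical integration using rectangles is a convenient example of an embarrassingly parallel computation. The sketch below (an illustration, assuming the classic integral of 4/(1+x^2) on [0,1], whose value is pi) assigns rectangles to ranks cyclically and combines the partial sums with one reduction:

#include <mpi.h>
#include <stdio.h>

/* Rectangle-rule estimate of pi = integral of 4/(1+x^2) on [0,1].
   Each rank handles every p-th rectangle (cyclic partitioning), then
   the partial sums are combined with a single MPI_Reduce. */
int main(int argc, char *argv[])
{
    int rank, p, i, n = 1000000;
    double h, x, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    h = 1.0 / n;
    for (i = rank; i < n; i += p) {
        x = (i + 0.5) * h;                    /* midpoint of rectangle i */
        local += 4.0 / (1.0 + x * x) * h;
    }
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("pi ~ %.10f\n", pi);
    MPI_Finalize();
    return 0;
}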

Sorting numbers
Solving a system of linear equations
Spawn
Personal Condor
Parallel regions
Work sharing - parallel for (forall)
First/last private
OpenMP reduction: reduction(+:variable)
Fork-join
Thread-safe
Shared data
Critical section
Locks and semaphores
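The reduction(+:variable) clause mentioned above takes only a few lines to demonstrate; this is a minimal sketch (not course code; compile with an OpenMP-enabled compiler, e.g. with -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;
    int i;

    /* Fork-join: the parallel for splits the iterations among threads;
       reduction(+:sum) gives each thread a private copy of sum and adds
       the copies together at the end, avoiding a data race on sum. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++)
        sum += 1.0 / (i + 1.0);

    printf("harmonic sum H_%d = %f (threads available: %d)\n",
           n, sum, omp_get_max_threads());
    return 0;
}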

Bernstein’s conditions
OpenMP sections
Load Balancing
Termination detection
Static/Dynamic
Centralized
Decentralized
Work pool
Termination detection using a token or two tokens
Synchronous Computations
Barrier
Jacobi Iteration
Allgather
Block/cyclic allocation
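Jacobi iteration is the standard example of a synchronous computation: every value in a sweep is computed from the previous iterate, so a parallel version needs a barrier (or an Allgather of the new vector) between sweeps. The following is a small sequential sketch for an illustrative diagonally dominant 4x4 system (example values chosen here, not from the slides):

#include <stdio.h>

#define N 4

/* Jacobi iteration for Ax = b: x_new[i] uses only the previous iterate x,
   so within one sweep all updates are independent; consecutive sweeps are
   separated by a synchronous update (a barrier point in a parallel code). */
int main(void)
{
    double A[N][N] = {{10,1,1,1},{1,10,1,1},{1,1,10,1},{1,1,1,10}};
    double b[N] = {13, 13, 13, 13};        /* exact solution is x = (1,1,1,1) */
    double x[N] = {0}, x_new[N];
    int i, j, iter;

    for (iter = 0; iter < 50; iter++) {
        for (i = 0; i < N; i++) {
            double s = b[i];
            for (j = 0; j < N; j++)
                if (j != i) s -= A[i][j] * x[j];
            x_new[i] = s / A[i][i];
        }
        for (i = 0; i < N; i++) x[i] = x_new[i];   /* synchronous update */
    }
    for (i = 0; i < N; i++) printf("x[%d] = %f\n", i, x[i]);
    return 0;
}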

Heat Distribution Problem
Natural ordering
Sequential/Parallel code
Partitioning: Block/Strip
Cellular Automata
Game of Life
Matrix multiplication
Solving a system of linear equations
Partitioning into Submatrices
Recursive Algorithm
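The Game of Life illustrates both cellular automata and block/strip partitioning: each new cell depends only on the previous generation, so rows or blocks can be updated independently and only boundary rows need to be exchanged between sweeps. A small sequential sketch of one generation (illustrative only, with fixed dead boundaries and a hand-placed glider):

#include <stdio.h>

#define N 8

/* One synchronous generation of Conway's Game of Life on an N x N grid.
   next[i][j] depends only on the previous grid, so rows (or blocks) can be
   assigned to different processes in a parallel version. */
void step(int grid[N][N], int next[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int live = 0;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++) {
                    int ni = i + di, nj = j + dj;
                    if ((di || dj) && ni >= 0 && ni < N && nj >= 0 && nj < N)
                        live += grid[ni][nj];
                }
            next[i][j] = (live == 3) || (grid[i][j] && live == 2);
        }
}

int main(void)
{
    int a[N][N] = {0}, b[N][N];
    a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;   /* a glider */
    step(a, b);
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) putchar(b[i][j] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}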

Dense/Sparse matrices
Gaussian Elimination
Iterative Methods
Finite Difference Method
Gauss-Seidel Relaxation
Red-Black Ordering
Overrelaxation
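For the iterative solvers above, the usual update formulas (standard textbook forms, quoted here for reference, for a heat-distribution grid value h_{i,j}) are:

Gauss-Seidel: h_{i,j} \leftarrow \frac{1}{4}\left(h_{i-1,j} + h_{i+1,j} + h_{i,j-1} + h_{i,j+1}\right), using the newest available neighbour values.

Overrelaxation (SOR): h_{i,j} \leftarrow h_{i,j} + \omega\left(\frac{h_{i-1,j} + h_{i+1,j} + h_{i,j-1} + h_{i,j+1}}{4} - h_{i,j}\right), with 1 < \omega < 2.

Red-Black ordering colours the grid like a checkerboard so that all red points (which depend only on black neighbours) can be updated in parallel, followed by all black points.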