Parallel Programming in .NET
Kevin Luty

Agenda
 History of Parallelism
 Benefits of Parallel Programming and Designs
 What to Consider
 Defining Types of Parallelism
 Design Patterns and Practices
 Tools
 Supporting Libraries

History of Parallelism
 Hardware
   Parallel hardware first appears in supercomputers
   Early 1980s – supercomputer built with 8086/8087 microprocessors
   Late 1980s – cluster computing power
   Moore's Law
   Amdahl's Law
   Most recently – number of cores over clock speeds

Moore's Law

Amdahl’s Law
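For reference, the law itself (the slide's chart is not preserved in the transcript): if p is the fraction of a program that can be parallelized and n is the number of processors, the achievable speedup is bounded by

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}},
\qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p}
```

Even with unlimited processors, the serial fraction (1 − p) caps the overall speedup at 1/(1 − p).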

History of Parallelism
 Software
   One processor means sequential programs
   Few APIs promote/make use of parallel programming
   1990s – no standards were created for parallel programming
   By 2000 – Message Passing Interface (MPI), POSIX threads (pthreads), Open Multiprocessing (OpenMP)

Benefits of Parallel Programming
 Task Parallel Library (TPL) for .NET
 Hardware capabilities
 Easy to write (PLINQ)
 You can use all the cores!
 Timing
 Tools available for debugging
 Cost effective

What to Consider
 Define:
   What needs to be done?
   What data is being modified?
   What is the current state?
 Cost
 Synchronous vs. asynchronous execution
 Output = which pattern is best

Defining Types of Parallelism
 Parallelism – programming with multiple threads, where it is expected that the threads will execute at the same time on multiple processors. Its goal is to increase throughput.

Defining Types of Parallelism
 Data parallelism – when there is a lot of data, and the same operations must be performed on each piece of data
 Task parallelism – there are many different operations that can run simultaneously (see the sketch below)
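A minimal sketch contrasting the two with the TPL; the array contents and the three actions are made-up placeholders:

```csharp
using System;
using System.Threading.Tasks;

class TypesOfParallelism
{
    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Data parallelism: the same operation applied to every element.
        Parallel.ForEach(data, n => Console.WriteLine(n * n));

        // Task parallelism: different operations running at the same time.
        Parallel.Invoke(
            () => Console.WriteLine("loading data"),
            () => Console.WriteLine("rendering a preview"),
            () => Console.WriteLine("writing a log entry"));
    }
}
```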

Parallel Loops
 Perform the same independent operation for each element
 The most common problem is not noticing dependencies (see the sketch below)
 How to notice dependencies:
   Shared variables
   Using properties of an object
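A minimal sketch of an independent loop next to the shared-variable pitfall; the array and the math are made up for illustration:

```csharp
using System;
using System.Threading.Tasks;

class ParallelLoops
{
    static void Main()
    {
        var results = new double[1000];

        // Independent iterations: each one writes only its own element. Safe.
        Parallel.For(0, results.Length, i => results[i] = Math.Sqrt(i));

        // Hidden dependency: every iteration mutates the same shared variable.
        // "sum += ..." is a read-modify-write race, so updates can be lost.
        double sum = 0;
        Parallel.For(0, results.Length, i => sum += results[i]); // racy!

        Console.WriteLine(sum); // often NOT the true total
    }
}
```

The right way to reduce values in parallel is the aggregation pattern shown a few slides later.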

Parallel Loops
 Helpful facilities of the TPL (sketch below):
   .Break and .Stop
   CancellationToken
   MaxDegreeOfParallelism
   Exception handling
   AggregateException
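A sketch of these knobs used together; the loop bodies are placeholders:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class LoopControl
{
    static void Main()
    {
        // ParallelLoopState: Break() lets earlier iterations finish first,
        // Stop() halts all iterations as soon as possible.
        Parallel.For(0, 100, (i, state) =>
        {
            if (i == 42) state.Break();
        });

        var cts = new CancellationTokenSource();
        var options = new ParallelOptions
        {
            CancellationToken = cts.Token,
            MaxDegreeOfParallelism = 2  // cap the number of concurrent workers
        };

        try
        {
            Parallel.For(0, 100, options, i =>
            {
                if (i == 10) cts.Cancel(); // could equally come from another thread
                options.CancellationToken.ThrowIfCancellationRequested();
            });
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("loop cancelled");
        }
        catch (AggregateException ex)
        {
            // Exceptions thrown inside iterations are collected here.
            foreach (var inner in ex.InnerExceptions)
                Console.WriteLine(inner.Message);
        }
    }
}
```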

Parallel Loops
 Partitioning (sketch below)
 Oversubscription and undersubscription
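Partitioning hands each worker a chunk of iterations so cheap loop bodies are not swamped by scheduling overhead; a minimal sketch with the built-in range partitioner:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Partitioning
{
    static void Main()
    {
        var squares = new double[10000];

        // Partitioner.Create yields contiguous (fromInclusive, toExclusive)
        // ranges, amortizing overhead across many tiny iterations.
        Parallel.ForEach(Partitioner.Create(0, squares.Length), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
                squares[i] = (double)i * i;
        });

        Console.WriteLine(squares[9999]);
    }
}
```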

Parallel Aggregation
 Makes use of parallel loops
 Makes use of unshared, local variables
 Multiple inputs, single output

Parallel Aggregation
 Simple example:
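The slide's code is not preserved in the transcript; a representative sketch of the pattern using Parallel.ForEach's thread-local overload:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ParallelAggregation
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 1000000).ToArray();
        long total = 0;

        Parallel.ForEach(
            data,
            () => 0L,                                    // localInit: per-thread subtotal
            (item, state, subtotal) => subtotal + item,  // body: touches no shared state
            subtotal => Interlocked.Add(ref total, subtotal)); // localFinally: one merge per thread

        Console.WriteLine(total); // 500000500000
    }
}
```

Each thread accumulates into its own unshared local, so synchronization is needed only once per thread at the final merge.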

Parallel Tasks
 Task parallelism – also known as the Fork/Join pattern (sketch below)
 Uses the System.Threading.Tasks namespace
   TaskFactory
   Invoke
   Wait/WaitAny/WaitAll
   StartNew
 Handling exceptions
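A minimal fork/join sketch; ComputeA and ComputeB are stand-ins for real work:

```csharp
using System;
using System.Threading.Tasks;

class ParallelTasks
{
    static void Main()
    {
        // Fork: start independent tasks.
        Task<int> a = Task.Factory.StartNew(() => ComputeA());
        Task<int> b = Task.Factory.StartNew(() => ComputeB());

        // Join: wait for both, then combine the results.
        Task.WaitAll(a, b);
        Console.WriteLine(a.Result + b.Result);

        // Parallel.Invoke is shorthand for forking and joining void actions.
        Parallel.Invoke(() => ComputeA(), () => ComputeB());
    }

    static int ComputeA() => 21; // placeholder work
    static int ComputeB() => 21; // placeholder work
}
```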

Parallel Tasks
 Handling exceptions (sketch below)
   Exceptions are deferred until the task is observed (Wait/Result)
   AggregateException
 CancellationTokenSource
   Tasks can also be cancelled from outside the task
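A sketch showing a faulting task and an externally cancelled task both surfacing through one AggregateException:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskExceptions
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        Task faulty = Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("boom");
        });

        Task cancellable = Task.Factory.StartNew(() =>
        {
            while (true) cts.Token.ThrowIfCancellationRequested();
        }, cts.Token);

        cts.Cancel(); // cancellation requested from outside the task

        try
        {
            Task.WaitAll(faulty, cancellable); // exceptions surface here, not earlier
        }
        catch (AggregateException ex)
        {
            foreach (var inner in ex.InnerExceptions)
                Console.WriteLine(inner.GetType().Name);
            // InvalidOperationException, TaskCanceledException
        }
    }
}
```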

Parallel Tasks
 Work stealing – idle worker threads take queued tasks from busy threads' local queues

Pipelines
 Can be seen as an assembly line

Pipelines
 Uses BlockingCollection (sketch below)
   CompleteAdding
 Most problems in this design are due to starvation/blocking
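A minimal two-stage sketch; the item counts and the doubling step are placeholders:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Pipeline
{
    static void Main()
    {
        var stage1to2 = new BlockingCollection<int>(boundedCapacity: 100);

        // Producer stage: add items, then signal completion so the
        // consumer's loop can finish instead of blocking forever.
        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 10; i++) stage1to2.Add(i);
            stage1to2.CompleteAdding();
        });

        // Consumer stage: GetConsumingEnumerable blocks while the collection
        // is empty and ends once the producer calls CompleteAdding.
        var consumer = Task.Factory.StartNew(() =>
        {
            foreach (int item in stage1to2.GetConsumingEnumerable())
                Console.WriteLine(item * 2);
        });

        Task.WaitAll(producer, consumer);
    }
}
```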

Futures
 Future dependencies (sketch below)
 Wait/WaitAny/WaitAll
 Sequential/parallel example
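A minimal sketch of a future and a dependent continuation; the computation and the overlapped work are placeholders:

```csharp
using System;
using System.Threading.Tasks;

class Futures
{
    static void Main()
    {
        // A future: the result is computed in the background and
        // consumed later, blocking only if it is not ready yet.
        Task<int> future = Task.Factory.StartNew(() => ExpensiveComputation());

        // Express a dependency: this runs when the future completes.
        Task<int> dependent = future.ContinueWith(f => f.Result + 1);

        DoOtherWorkMeanwhile();
        Console.WriteLine(dependent.Result); // waits here only if necessary
    }

    static int ExpensiveComputation() => 41; // stand-in for real work
    static void DoOtherWorkMeanwhile() { }   // overlapped with the future
}
```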

Futures
 Model-View-ViewModel

Dynamic Task Parallelism
 Tasks are added continuously as the computation unfolds (sketch below)
 Complete small tasks, then larger tasks
 Binary trees and sorting
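A minimal sketch of the binary-tree case; the Node type and tiny tree are made up for illustration:

```csharp
using System;
using System.Threading.Tasks;

class DynamicTasks
{
    class Node
    {
        public int Value;
        public Node Left, Right;
    }

    // Each visit may spawn new tasks, so the task graph grows
    // dynamically with the shape of the data.
    static void Walk(Node node, Action<int> action)
    {
        if (node == null) return;
        var left = Task.Factory.StartNew(() => Walk(node.Left, action));
        var right = Task.Factory.StartNew(() => Walk(node.Right, action));
        action(node.Value);
        Task.WaitAll(left, right);
    }

    static void Main()
    {
        var root = new Node
        {
            Value = 1,
            Left = new Node { Value = 2 },
            Right = new Node { Value = 3 }
        };
        Walk(root, v => Console.WriteLine(v));
    }
}
```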

Tools
 .NET Performance Profiler (Red Gate)
 JustTrace (Telerik)
 GlowCode (Electric Software)
 Performance Profiler (Visual Studio 2010 Ultimate)
   Concurrency Visualizer
   CPU performance
   Memory management

Supporting Libraries for .NET
 Task Parallel Library (TPL)
 PLINQ (Parallel Language Integrated Query) – easy to learn (sketch below)
 Rx (Reactive Extensions)
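A minimal PLINQ sketch; the query itself is a made-up example:

```csharp
using System;
using System.Linq;

class PlinqExample
{
    static void Main()
    {
        // AsParallel turns an ordinary LINQ query into a parallel one;
        // the rest of the query reads exactly like sequential LINQ.
        long evenSquares = Enumerable.Range(1, 1000000)
                                     .AsParallel()
                                     .Where(n => n % 2 == 0)
                                     .Select(n => (long)n * n)
                                     .Sum();

        Console.WriteLine(evenSquares);
    }
}
```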

References
 Campbell, Colin, et al. Parallel Programming with Microsoft .NET: Design Patterns for Decomposition and Coordination on Multicore Architectures. Microsoft Press, 2010. Print.
 "Data Parallelism" (n.d.). In Wikipedia.
 Hillar, Gaston C. Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4. Indianapolis: Wiley, 2010. Print.
 "Rx Extensions" (n.d.). In Microsoft.
 Mattson, T. G., B. A. Sanders, and B. L. Massingill. Patterns for Parallel Programming. Addison-Wesley, 2004. Print.