Task Parallel Library (TPL)


Task Parallel Library (TPL)
A higher-level abstraction API for concurrency in C#
Task parallel library

Data parallelism vs. task parallelism
Two ways to partition a "problem" into tasks:

Data parallelism (master-slave)
- Partition the data and give each task a part of the data set.
- Works well if the data items are independent.
- ONE master coordinates the work; many slaves do the work.

Task parallelism (pipelining)
- Partition the algorithm and give each task a part of the algorithm.
- Works well if each part of the algorithm takes about the same time to execute.

Example: Password cracking, dictionary attack
A brute-force algorithm: we need lots of computing resources to make it run fast.

Setting:
- We have a list of usernames + encrypted passwords.
- We know which encryption algorithm was used.
- There is no known decryption algorithm.
- We want to find (some of) the passwords in clear text.
- We assume that (some) users have a password that is present in a dictionary, or is a variation of a word from a dictionary.

General algorithm:
1. Read a word from the dictionary.
2. Make a lot of variations of the word, like 123word, word22, Word, etc.
3. Encrypt all the variations.
4. Compare each encrypted variation to all the encrypted passwords from the password file.
5. If there is a match, we have found a password.
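The steps above can be sketched as a sequential program. This is a minimal illustration only: the wordlist, the target hash, and the variation rules are made up, and it assumes the password file holds unsalted SHA-256 hashes (real systems use salted, slow hashes).

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class DictionaryAttack
{
    // Hash a candidate the same way the password file was produced
    // (assumed here: unsalted SHA-256, hex-encoded).
    static string Hash(string s)
    {
        using var sha = SHA256.Create();
        return Convert.ToHexString(sha.ComputeHash(Encoding.UTF8.GetBytes(s)));
    }

    // Step 2: make variations of a dictionary word: Word, 123word, word22, ...
    static IEnumerable<string> Variations(string word)
    {
        yield return word;
        yield return char.ToUpper(word[0]) + word.Substring(1);
        yield return "123" + word;
        yield return word + "22";
    }

    static void Main()
    {
        string[] dictionary = { "secret", "dragon", "monkey" };  // stand-in wordlist
        var targets = new HashSet<string> { Hash("Dragon") };    // "encrypted" passwords

        // Steps 1 + 3 + 4 + 5: hash every variation and compare against the targets.
        foreach (string word in dictionary)
            foreach (string candidate in Variations(word))
                if (targets.Contains(Hash(candidate)))
                    Console.WriteLine($"Found password: {candidate}");
    }
}
```

Both parallel variants on the next slide reuse exactly these steps; only the partitioning changes.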

Example: Password cracking, dictionary attack (continued)

Data parallelism:
- The dictionary is divided into a number of sub-sets.
- Each sub-set is given to a task that performs all steps of the algorithm.
- No communication between tasks.

Task parallelism (pipelining):
- The algorithm is divided into a number of steps.
- Each task executes one step of the algorithm and sends its output to the next step.
- Lots of communication between tasks.
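The data-parallel variant can be sketched with `Parallel.ForEach` and a `Partitioner`: the wordlist index range is split into sub-ranges, and each task runs all steps on its own slice with no inter-task communication. The match test is a placeholder standing in for the hash-and-compare logic of the sequential sketch.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class DataParallelCrack
{
    static void Main()
    {
        string[] dictionary = { "secret", "dragon", "monkey", "qwerty" };
        var found = new ConcurrentBag<string>();   // thread-safe result collection

        // Partitioner.Create splits the index range into chunks, one per task.
        // Each task performs the whole algorithm on its own sub-set of words.
        Parallel.ForEach(Partitioner.Create(0, dictionary.Length), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                // ... generate variations, hash and compare, as in the
                // sequential sketch; placeholder match used here ...
                if (dictionary[i] == "dragon")
                    found.Add(dictionary[i]);
            }
        });

        Console.WriteLine($"Matches: {found.Count}");
    }
}
```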

Task Parallel Library (TPL)
The Task Parallel Library (TPL) is part of the .NET API, in the namespace System.Threading.Tasks.

Some interesting TPL classes:
- Task: for task parallelism. We have used Task.Run(Action action) to ask the thread pool to run a task, but there is much more to the Task class.
- Parallel: for data parallelism (and task parallelism).

Class Task: some methods and properties
- Task task = Task.Run(Action action). Action is a void method with no parameters: void M() { … }
- Task task = new Task(Action action), then task.Start() runs the previously created task object in the thread pool.
- Properties IsCompleted, IsFaulted, IsCanceled.
- Property Id, and the static property Task.CurrentId.
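A short sketch of both creation styles and the properties listed above:

```csharp
using System;
using System.Threading.Tasks;

class TaskBasics
{
    static void Main()
    {
        // Task.Run queues the action on the thread pool immediately.
        Task t1 = Task.Run(() => Console.WriteLine($"t1 running, Id={Task.CurrentId}"));

        // new Task(...) creates a "cold" task; Start() hands it to the thread pool.
        Task t2 = new Task(() => Console.WriteLine($"t2 running, Id={Task.CurrentId}"));
        t2.Start();

        Task.WaitAll(t1, t2);
        Console.WriteLine($"t1 completed: {t1.IsCompleted}, faulted: {t1.IsFaulted}");
    }
}
```

Note that Task.CurrentId is only meaningful inside a task's delegate; outside it is null.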

Task with return values
- Task&lt;TResult&gt; task = Task.Run(Func&lt;TResult&gt; function) runs the function in the thread pool. Func&lt;TResult&gt; is a method that returns a TResult and takes no parameters.
- Task&lt;TResult&gt; task = new Task&lt;TResult&gt;(Func&lt;TResult&gt; function) is the constructor form; call Start() to run it.
- The Result property contains the result of the task, and blocks until the result is ready.
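A minimal example of Task&lt;TResult&gt;: the Func&lt;TResult&gt; computes a value on the thread pool, and reading Result blocks the caller until that value is ready.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class TaskResultDemo
{
    static void Main()
    {
        // Func<long>: no parameters, returns a value; runs on the thread pool.
        Task<long> sumTask = Task.Run(() => Enumerable.Range(1, 100).Sum(i => (long)i));

        // Reading Result blocks until the task has finished.
        Console.WriteLine($"Sum = {sumTask.Result}");   // prints Sum = 5050
    }
}
```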

The class Parallel
A very high level of abstraction:
- Level 1: Thread
- Level 2: Task
- Level 3: Parallel

Parallel.Invoke(action, action, …, action)
- The actions are invoked in parallel; Invoke returns when the last action has finished.
- Task parallelism: different tasks run in parallel, and the tasks are generally not similar.
- Usually a few (large) tasks; efficient when the actions need about the same amount of time to complete.
- Example: Gaston Hillar, Professional Parallel Programming with C#, example 2_3.
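A small sketch of Parallel.Invoke with three dissimilar actions (the action names are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

class InvokeDemo
{
    static void Main()
    {
        // Three different (dissimilar) actions run in parallel.
        // Invoke returns only when the last one has finished.
        Parallel.Invoke(
            () => Console.WriteLine("load configuration"),
            () => Console.WriteLine("warm up cache"),
            () => Console.WriteLine("open log file"));

        Console.WriteLine("all actions done");
    }
}
```

The three lines may appear in any order, but "all actions done" is always last.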

Parallel for loop
Parallel.For(fromValue, toValue, Action&lt;int&gt;)
- Similar to an ordinary for loop, but parallel, of course. The action is invoked for each value from fromValue (inclusive) to toValue (exclusive).
- Data parallelism: each task is given a part of the data set, and the tasks are similar.
- Usually a lot of (small) tasks; efficient even if the tasks need different amounts of time to complete.
- Example: Gaston Hillar, Professional Parallel Programming with C#, example 2_5.
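A minimal Parallel.For sketch. Each iteration writes a distinct array element, so the body needs no locking:

```csharp
using System;
using System.Threading.Tasks;

class ParallelForDemo
{
    static void Main()
    {
        int[] squares = new int[10];

        // Body runs for i = 0..9 (10 is exclusive), possibly on several threads.
        Parallel.For(0, 10, i => squares[i] = i * i);

        Console.WriteLine(string.Join(",", squares));  // prints 0,1,4,9,16,25,36,49,64,81
    }
}
```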

Parallel foreach loop
Parallel.ForEach(IEnumerable&lt;TSource&gt; source, Action&lt;TSource&gt; action)
- Similar to an ordinary foreach loop, but parallel. The action is invoked for each element in the IEnumerable (think "list" or "array").
- Often used with a Partitioner, which divides a range of data into a number of sub-ranges.
- Example: Gaston Hillar, Professional Parallel Programming with C#, example 2_11.
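A sketch of Parallel.ForEach combined with a Partitioner: the range is divided into sub-ranges, each task sums its own sub-range locally, and the partial sums are combined with Interlocked.Add.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class ForEachDemo
{
    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };
        long total = 0;

        // Partitioner.Create chunks the index range so each task
        // grabs a sub-range instead of one element at a time.
        Parallel.ForEach(Partitioner.Create(0, data.Length), range =>
        {
            long localSum = 0;                        // per-task partial sum
            for (int i = range.Item1; i < range.Item2; i++)
                localSum += data[i];
            Interlocked.Add(ref total, localSum);     // combine safely
        });

        Console.WriteLine($"Total = {total}");        // prints Total = 36
    }
}
```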

References and further reading
- MSDN: Task Parallel Library (TPL), http://msdn.microsoft.com/en-us/library/dd460717(v=vs.110).aspx
- Joseph Albahari: Threading in C#, Part 5: Parallel Programming
  - The Parallel Class, http://www.albahari.com/threading/part5.aspx#_The_Parallel_Class
  - Task Parallelism, http://www.albahari.com/threading/part5.aspx#_Task_Parallelism
- Gaston C. Hillar: Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4, Wrox/Wiley, 2011
  - Chapter 2: Imperative Data Parallelism, pages 29-72
  - Chapter 3: Imperative Task Parallelism, pages 73-102