CILK: An Efficient Multithreaded Runtime System

People
Project at MIT & now at UT Austin:
Bobby Blumofe (now UT Austin, Akamai)
Chris Joerg
Brad Kuszmaul (now Yale)
Charles Leiserson (MIT, Akamai)
Keith Randall (Bell Labs)
Yuli Zhou (Bell Labs)

Outline
Introduction
Programming environment
The work-stealing thread scheduler
Performance of applications
Modeling performance
Proven properties
Conclusions

Introduction
Why multithreading? To implement dynamic, asynchronous, concurrent programs.
The Cilk programmer optimizes two quantities: total work and critical path.
A Cilk computation is viewed as a dynamic, directed acyclic graph (dag).

Introduction ...

Introduction ...
A Cilk program is a set of procedures.
A procedure is a sequence of threads.
Cilk threads are:
represented by nodes in the dag
non-blocking: they run to completion, with no waiting or suspension: atomic units of execution

Introduction ...
Threads can spawn child threads; downward edges connect a parent to its children.
A child & its parent can run concurrently.
Non-blocking threads ⇒ a child cannot return a value to its parent.
Instead, the parent spawns a successor that receives the values from its children.

Introduction ...
A thread & its successor are parts of the same Cilk procedure, connected by horizontal arcs.
Children’s returned values are received before their successor begins: they constitute data dependencies, shown as curved arcs.

Introduction ...

Introduction: Execution Time
The execution time of a Cilk program on P processors depends on:
Work (T1): the time for the Cilk program to complete on 1 processor.
Critical path (T∞): the time to execute the longest directed path in the dag.
TP >= T1 / P (not true for some searches)
TP >= T∞
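
A worked example with hypothetical numbers: if T1 = 100 s and T∞ = 10 s, then with P = 8 processors TP >= max( 100/8, 10 ) = 12.5 s, while with P = 32 processors TP >= max( 3.125, 10 ) = 10 s. Beyond P = T1/T∞ = 10, the critical-path bound dominates and extra processors cannot reduce the lower bound.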

Introduction: Scheduling
Cilk uses a runtime scheduling technique called work stealing.
It works well on dynamic, asynchronous, MIMD-style programs.
For “fully strict” programs, Cilk achieves asymptotic optimality for space, time, & communication.

Introduction: Language
Cilk is an extension of C.
Cilk programs are:
preprocessed to C
linked with a runtime library

Programming Environment
Declaring a thread:
thread T ( <args> ) { <stmts> }
T is preprocessed into a C function of one argument and return type void.
The one argument is a pointer to a closure.

Environment: Closure
A closure is a data structure that has:
a pointer to the C function for T
a slot for each argument (inputs & continuations)
a join counter: the count of missing argument values
A closure is ready when its join counter == 0; otherwise it is waiting.
Closures are allocated from a runtime heap.

Environment: Continuation
A Cilk continuation is a data type, declared with the keyword cont:
cont int x;
It is a global reference to an empty argument slot of a closure.
It is implemented as 2 items:
a pointer to the closure (what thread)
an int value: the slot number (what input)

Environment: Closure
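
The slide’s figure is not reproduced in this transcript. As a rough stand-in, here is a hypothetical C sketch of the two structures just described (field names invented; the actual Cilk runtime layout differs):

/* Hypothetical sketch of a closure and a continuation. */
typedef struct closure {
    void (*code)(struct closure *);  /* C function for thread T       */
    int join_counter;                /* # of missing argument values  */
    int args[4];                     /* one slot per argument         */
} closure_t;

typedef struct {
    closure_t *target;               /* which closure (what thread)      */
    int slot;                        /* which argument slot (what input) */
} cont_t;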

Environment: spawn
To spawn a child, a thread creates the child’s closure:
spawn T ( <args> )
creates the child’s closure
sets the available arguments
sets the join counter
To specify a missing argument, prefix it with “?”:
spawn T ( k, ?x );

Environment: spawn_next
A successor thread is spawned the same way as a child, except the keyword spawn_next is used:
spawn_next T ( k, ?x );
Children typically have no missing arguments; successors do.

Explicit continuation passing
Non-blocking threads ⇒ a parent cannot block on its children’s results.
Instead, it spawns a successor thread.
This communication paradigm is called explicit continuation passing.
Cilk provides a primitive to send a value from one closure to another.

send_argument
Cilk provides the primitive send_argument( k, value ), which sends value to the argument slot of a waiting closure specified by continuation k.
(Diagram: the parent spawns a child and a spawn_next successor; the child’s send_argument fills the successor’s argument slot.)

Cilk procedure for computing a Fibonacci number:

thread fib ( cont int k, int n )
{
    if ( n < 2 )
        send_argument( k, n );
    else {
        cont int x, y;
        spawn_next sum ( k, ?x, ?y );
        spawn fib ( x, n - 1 );
        spawn fib ( y, n - 2 );
    }
}

thread sum ( cont int k, int x, int y )
{
    send_argument( k, x + y );
}
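
To make the closure machinery concrete, here is a minimal, self-contained C sketch of roughly what the preprocessor and runtime do for sum. All names (closure_t, k_target, sum_thread, and so on) are invented for illustration and are not the actual Cilk runtime API; in particular, a real runtime enqueues ready closures on a scheduler deque instead of running them immediately.

#include <stdio.h>
#include <stdlib.h>

typedef struct closure {
    void (*code)(struct closure *);  /* the thread's C function       */
    int join_counter;                /* # of missing argument values  */
    struct closure *k_target;        /* continuation: target closure  */
    int k_slot;                      /* continuation: slot number     */
    int args[3];                     /* argument slots                */
} closure_t;

static void send_argument(closure_t *c, int slot, int value)
{
    c->args[slot] = value;
    if (--c->join_counter == 0)
        c->code(c);    /* ready: a real runtime would enqueue it */
}

/* Roughly what the preprocessor might emit for thread sum. */
static void sum_thread(closure_t *self)
{
    send_argument(self->k_target, self->k_slot,
                  self->args[1] + self->args[2]);
    free(self);
}

/* A terminal continuation that just prints its one argument. */
static void print_thread(closure_t *self)
{
    printf("result = %d\n", self->args[0]);
    free(self);
}

int main(void)
{
    closure_t *done = calloc(1, sizeof *done);
    done->code = print_thread;
    done->join_counter = 1;          /* waiting for one value */

    closure_t *sum = calloc(1, sizeof *sum);  /* spawn_next sum(k, ?x, ?y) */
    sum->code = sum_thread;
    sum->join_counter = 2;           /* x and y are missing */
    sum->k_target = done;
    sum->k_slot = 0;

    send_argument(sum, 1, 3);        /* as if spawn fib(x, n-1) returned 3 */
    send_argument(sum, 2, 5);        /* as if spawn fib(y, n-2) returned 5 */
    return 0;
}

Running this sketch prints result = 8: the second send_argument drives sum’s join counter to 0, sum_thread runs, and its own send_argument readies the printing closure.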

Non-blocking Threads: Advantages
Shallow call stack.
Simplified runtime system: completed threads leave the C runtime stack empty.
Portable runtime implementation.

Non-blocking Threads: Disadvantages
Burdens the programmer with explicit continuation passing.

Work-Stealing Scheduler
The concept of work stealing goes back at least as far as 1981.
Work stealing: a processor with no work selects a victim from which to get work.
It takes the shallowest thread in the victim’s spawn tree.
In Cilk, thieves choose victims randomly.

Thread Level

Stealing Work: The Ready Deque
Each closure has a level:
level( child ) = level( parent ) + 1
level( successor ) = level( parent )
Each processor maintains a ready deque:
It contains ready closures.
The Lth element contains the list of all ready closures whose level is L.

Ready deque
if ( ! readyDeque.isEmpty() )
    take deepest thread
else
    steal shallowest thread from the readyDeque of a randomly selected victim
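
A minimal C sketch of this scheduling rule, with each ready deque represented as an array of per-level lists. All names are hypothetical; the real Cilk scheduler adds per-deque locking and the provably-good details discussed below.

#include <stdlib.h>

#define MAX_LEVEL 64

typedef struct closure {
    struct closure *next;           /* link within one deque level */
    /* code pointer, join counter, args as sketched earlier */
} closure_t;

typedef struct {
    closure_t *level[MAX_LEVEL];    /* ready closures, indexed by spawn-tree level */
} ready_deque_t;

static closure_t *pop_level(ready_deque_t *d, int l)
{
    closure_t *c = d->level[l];
    d->level[l] = c->next;
    return c;
}

/* One scheduling decision, per the rule above. */
static closure_t *next_work(ready_deque_t *self,
                            ready_deque_t *deque[], int P)
{
    for (int l = MAX_LEVEL - 1; l >= 0; l--)    /* deepest local thread first */
        if (self->level[l])
            return pop_level(self, l);
    ready_deque_t *victim = deque[rand() % P];  /* random victim */
    for (int l = 0; l < MAX_LEVEL; l++)         /* steal the shallowest */
        if (victim->level[l])
            return pop_level(victim, l);
    return NULL;                                /* failed request; retry */
}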

Why Steal the Shallowest Closure?
Shallow threads probably produce more work and therefore reduce communication.
Shallow threads are more likely to be on the critical path.

Readying a Remote Closure
If a send_argument makes a remote closure ready, the closure is put on the sending processor’s readyDeque ⇒ extra communication.
This is done to make the scheduler provably good.
Putting it on the local readyDeque works well in practice.

Performance of Applications
Tserial = time for the C program
T1 = time for the 1-processor Cilk program
Tserial / T1 = efficiency of the Cilk program
Efficiency is close to 1 for programs with moderately long threads: Cilk overhead is small.

Performance of Applications
T1 / TP = speedup
T1 / T∞ = average parallelism
If the average parallelism is large, then speedup is nearly perfect.
If the average parallelism is small, then speedup is much smaller.
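
A worked example with hypothetical numbers: if T1 = 1000 s and T∞ = 10 s, the average parallelism is T1/T∞ = 100. With P = 16 << 100 processors, nearly perfect speedup (close to 16) can be expected; with P = 256 > 100, the critical path caps the speedup at 100.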

Performance Data

Performance of Applications
Application speedup = efficiency × speedup = ( Tserial / T1 ) × ( T1 / TP ) = Tserial / TP

Modeling Performance
TP >= max( T∞, T1 / P )
A good scheduler should come close to these lower bounds.

Modeling Performance
Empirical data suggest that for Cilk:
TP ≈ c1 T1 / P + c∞ T∞, where c1 ≈ 1.067 & c∞ ≈ 1.042
If T1 / T∞ > 10 P, then the critical path does not affect TP.
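
Plugging hypothetical numbers into this model: with T1 = 1000 s, T∞ = 1 s, and P = 32, TP ≈ 1.067 × 1000/32 + 1.042 × 1 ≈ 33.3 + 1.0 ≈ 34.4 s. Here T1/T∞ = 1000 > 10P = 320, so the critical-path term is indeed negligible.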

Proven Property: Time
Including scheduling overhead, TP = O( T1 / P + T∞ ), which is asymptotically optimal.

Conclusions
We can predict the performance of a Cilk program by observing two machine-independent characteristics, work and critical path, when the program is fully strict.
Cilk’s usefulness is unclear for other kinds of programs (e.g., iterative programs).

Conclusions ...
Explicit continuation passing is a nuisance.
It was subsequently removed (with more clever preprocessing).

Conclusions ...
Great systems research has a theoretical underpinning. Such research identifies important properties of the systems themselves, or of our ability to reason about them formally.
Cilk identified 3 significant system properties:
fully strict programs
non-blocking threads
randomly choosing a victim

END

The Cost of Spawns
A spawn is about an order of magnitude more costly than a C function call.
Spawned threads running on the parent’s processor can be implemented more efficiently than remote spawns, and this usually is the case.
Compiler techniques can exploit this distinction.

Communication Efficiency A request is an attempt to steal work (the victim may not have work). Requests/processor & steals/processor both grow as the critical path grows.

Proven Properties: Space
A fully strict program’s threads send arguments only to their parents’ successors.
For such programs, the space, time, & communication bounds are proven.
Space: SP <= S1 P. There exists a P-processor execution for which this bound is asymptotically optimal.
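
A worked example with hypothetical numbers: if the 1-processor execution uses S1 = 10 MB, then every 64-processor execution uses at most SP <= 64 × 10 MB = 640 MB.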

Proven Properties: Communication
Communication: the expected number of bits communicated in a P-processor execution is O( T∞ P Smax ), where Smax denotes the largest closure size.
There exists a program such that, for all P, there exists a P-processor execution that communicates k bits, where k > c T∞ P Smax, for some constant c.