Parallelism and Concurrency
Koen Lindström Claessen, Chalmers University, Gothenburg, Sweden
Ulf Norell

Expressing Parallelism
In a pure, lazy language:
- Evaluation is done when needed
- Evaluation order does not affect the meaning of the program
- One can do more work than is needed

Two Primitives

    seq :: a -> b -> b
    par :: a -> b -> b

seq x y: evaluate x first (to weak head normal form), then return y
par x y: start evaluating x in parallel, and immediately return y

Example

    f' x y z = x `par` y `par` z `par` f x y z

Idea: write the normal program first, then add parallelism to speed it up.
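As a concrete, runnable sketch of this idea (not from the slides; it assumes GHC's parallel package, compilation with -threaded, and running with +RTS -N):

    import Control.Parallel (par, pseq)

    -- Naive Fibonacci: the two recursive calls are independent,
    -- so we spark one with `par` while evaluating the other.
    -- pseq is like seq, but guarantees the evaluation order.
    nfib :: Int -> Integer
    nfib n
      | n < 2     = 1
      | otherwise = x `par` (y `pseq` (x + y))
      where
        x = nfib (n - 1)
        y = nfib (n - 2)

    main :: IO ()
    main = print (nfib 30)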

QuickSort

    qsort :: Ord a => [a] -> [a]
    qsort []  = []
    qsort [x] = [x]
    qsort (x:xs) = losort `par` hisort `par` (losort ++ (x : hisort))
      where
        losort = qsort [y | y <- xs, y < x]
        hisort = qsort [y | y <- xs, y >= x]

QuickSort (II)

    qsort :: Ord a => [a] -> [a]
    qsort []  = []
    qsort [x] = [x]
    qsort (x:xs) = force losort `par` force hisort `par`
                   (losort ++ (x : hisort))
      where
        losort = qsort [y | y <- xs, y < x]
        hisort = qsort [y | y <- xs, y >= x]

    force :: [a] -> ()
    force []     = ()
    force (x:xs) = x `seq` force xs
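For reference, today one would typically reach for the deepseq package instead of a hand-written force. This variant is a sketch under that assumption (Control.DeepSeq's force fully evaluates a value and returns it):

    import Control.Parallel (par)
    import Control.DeepSeq (NFData, force)

    qsortP :: (Ord a, NFData a) => [a] -> [a]
    qsortP []  = []
    qsortP [x] = [x]
    qsortP (x:xs) =
        -- spark the complete evaluation of each half in parallel
        force losort `par` force hisort `par`
        (losort ++ (x : hisort))
      where
        losort = qsortP [y | y <- xs, y < x]
        hisort = qsortP [y | y <- xs, y >= x]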

Parallel Map

    pmap :: (a -> b) -> [a] -> [b]
    pmap f []     = []
    pmap f (x:xs) = fx `par` fxs `par` (fx : fxs)
      where
        fx  = f x
        fxs = pmap f xs

Parallel Strategies

    type Done       = ()
    type Strategy a = a -> Done

    using :: a -> Strategy a -> a
    a `using` strat = strat a `seq` a

Parallel Strategies

    inSeq, inPar :: a -> Strategy b
    inSeq x = \y -> x `seq` ()
    inPar x = \y -> x `par` ()

    parList :: Strategy a -> Strategy [a]
    parList strat []     = ()
    parList strat (x:xs) = strat x `par` parList strat xs

Parallel Strategies

    pmap :: Strategy b -> (a -> b) -> [a] -> [b]
    pmap strat f xs = map f xs `using` parList strat

More...
Implemented in GHC:
- Control.Parallel
- Control.Parallel.Strategies
Also look at:
- Control.Concurrent (requires -threaded)
- Control.Monad.STM
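For example, Control.Parallel.Strategies from the parallel package provides a ready-made parMap that plays the role of the pmap defined above. A small usage sketch, assuming the parallel and deepseq packages:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Evaluate every element of the result fully, in parallel.
    squares :: [Int] -> [Int]
    squares = parMap rdeepseq (\x -> x * x)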

Concurrent Programming
Processes:
- Concurrency
- Parallelism
Shared resources:
- Communication
- Locks
- Blocking

Concurrent Haskell (v1.0)
Control.Concurrent

    fork :: IO a -> IO Pid           -- starting/killing processes
    kill :: Pid -> IO ()

    type MVar a                      -- a shared resource
    newEmptyMVar :: IO (MVar a)
    takeMVar :: MVar a -> IO a       -- blocking actions
    putMVar  :: MVar a -> a -> IO ()
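In GHC the actual names differ slightly from the slides: forkIO :: IO () -> IO ThreadId and killThread :: ThreadId -> IO () instead of fork, kill, and Pid. A minimal runnable sketch using the real API:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, takeMVar, putMVar)

    main :: IO ()
    main = do
      box <- newEmptyMVar
      _   <- forkIO (putMVar box "hello from the worker")
      msg <- takeMVar box   -- blocks until the worker has put a value
      putStrLn msg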

Concurrent Haskell (v1.0)
Control.Concurrent.Chan

    type Chan a                       -- an unbounded channel
    newChan   :: IO (Chan a)
    readChan  :: Chan a -> IO a
    writeChan :: Chan a -> a -> IO ()
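A small producer/consumer sketch using this API (standard GHC, nothing beyond Control.Concurrent):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      ch <- newChan
      _  <- forkIO (forM_ [1 .. 5 :: Int] (writeChan ch))  -- producer
      replicateM_ 5 (readChan ch >>= print)  -- consumer: each readChan
                                             -- blocks until data arrives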

Typical Concurrent Programming Today
Use MVars (or similar concepts) to implement "locks":
- Grab the lock (block if someone else has it)
- Do your thing
- Release the lock
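One common encoding of this pattern (a sketch, not from the slides): an MVar () serves as the lock, takeMVar grabs it, putMVar releases it, and bracket makes the release exception-safe:

    import Control.Concurrent.MVar (MVar, newMVar, takeMVar, putMVar)
    import Control.Exception (bracket)

    type Lock = MVar ()

    newLock :: IO Lock
    newLock = newMVar ()

    -- Grab the lock, run the action, and release the lock
    -- even if the action throws an exception.
    withLock :: Lock -> IO a -> IO a
    withLock l act = bracket (takeMVar l) (putMVar l) (const act)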

Problems With Locking
- Races: a forgotten lock
- Deadlock: grabbing/releasing locks in the wrong order
- Error recovery: restoring invariants and releasing locks after a failure

The Biggest Problem
Locks are not compositional!
(Compositional = build a working system from working pieces.)

    action1 = withdraw a 100
    action2 = deposit b 100

    action3 = do withdraw a 100
                 -- between the two steps the accounts are
                 -- in an inconsistent state
                 deposit b 100

Solution (?)
Expose the locks:

    action3 = do lock a
                 lock b
                 withdraw a 100
                 deposit b 100
                 release a
                 release b

Danger of deadlock! So the locks must be taken in a globally consistent order:

    if a < b then do lock a; lock b
             else do lock b; lock a

More Problems

    action4 = do ...
    action5 = do ...
    action6 = action4 AND action5

Impossible! We would need to keep track of all the locks of each action, and compose these.

Conclusion
Programming with locks is:
- not compositional
- not scalable
- a headache
- a source of errors
...
A new concurrent programming paradigm is sorely needed.

Idea
Borrow ideas from the database people: transactions.
Add ideas from functional programming:
- computations are first-class values
- what side effects can happen where is controlled
Et voilà!

Software Transactional Memory (STM)
First ideas in 1993.
New developments in 2005:
- Simon Peyton Jones
- Simon Marlow
- Tim Harris
- Maurice Herlihy

Atomic Transactions

    action3 = atomically {
        withdraw a 100
        deposit b 100
      }

"Write sequential code, and wrap atomically around it."

How Does It Work?

    action3 = atomically {
        withdraw a 100
        deposit b 100
      }

- Execute the body without taking any locks.
- Each memory access is logged; no actual update is performed.
- At the end, we try to commit the log to memory.
- The commit may fail; then we retry the whole atomic block.

Transactional Memory
- No locks, so no race conditions
- No locks, so no deadlocks
- Error recovery is easy: an exception aborts the whole block, which is then retried
- Simple code, and scalable

Caveats
Absolutely forbidden:
- reading a transaction variable outside an atomic block
- writing to a transaction variable outside an atomic block
- making actual changes inside an atomic block...

Simon's Missile Program

    action3 = atomically {
        withdraw a 100
        launchNuclearMissiles
        deposit b 100
      }

No side effects allowed! A transaction may abort and run several times, so an irrevocable action like launching missiles must never appear inside one.

STM Haskell
Control.Concurrent.STM
- First (and so far only?) fully fledged implementation of STM
- Implementations for C++, Java, and C# are on the way... but they find these problems difficult to solve
- In Haskell, it is easy! Side effects are controlled.

STM Haskell
Control.Concurrent.STM

    type STM a
    instance Monad STM

    type TVar a
    newTVar   :: a -> STM (TVar a)
    readTVar  :: TVar a -> STM a
    writeTVar :: TVar a -> a -> STM ()

    atomically :: STM a -> IO a
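Rendered in this API, the earlier bank-account example becomes real code. The definitions of withdraw and deposit here are a reconstruction of the slides' running example, not taken from them:

    import Control.Concurrent.STM

    type Account = TVar Int

    withdraw :: Account -> Int -> STM ()
    withdraw acc amount = do
      bal <- readTVar acc
      writeTVar acc (bal - amount)

    deposit :: Account -> Int -> STM ()
    deposit acc amount = withdraw acc (negate amount)

    -- The composed action runs as a single indivisible transaction.
    transfer :: Account -> Account -> Int -> IO ()
    transfer from to amount =
      atomically (withdraw from amount >> deposit to amount)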

Example

    type Resource = TVar Int

    putR :: Resource -> Int -> STM ()
    putR r i = do v <- readTVar r
                  writeTVar r (v + i)

    main = do ...
              atomically (putR r 13)
              ...

Example

    retry :: STM a

    getR :: Resource -> Int -> STM ()
    getR r i = do v <- readTVar r
                  if v < i
                    then retry
                    else writeTVar r (v - i)

    main = do ...
              atomically (do getR r1 4
                             putR r2 4)
              ...
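To see the blocking behaviour, here is a runnable sketch (assuming the putR and getR definitions above): the main thread blocks in getR until a forked thread has supplied enough of the resource.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM

    main :: IO ()
    main = do
      r <- atomically (newTVar 0)
      _ <- forkIO $ do
             threadDelay 1000000          -- wait one second
             atomically (putR r 10)
      atomically (getR r 4)  -- retries until r holds at least 4
      putStrLn "got 4 units"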

Retrying
An atomic block is retried when:
- the programmer says so (retry), or
- the commit at the end fails
Before retrying, we wait until one of the variables used in the atomic block is changed. Why is this sound? Referential transparency!

Compositional Choice

    orElse :: STM a -> STM a -> STM a

    main = do ...
              atomically (getR r1 4 `orElse` getR r2 4)
              ...

Laws:

    m1 `orElse` (m2 `orElse` m3) = (m1 `orElse` m2) `orElse` m3
    retry `orElse` m = m
    m `orElse` retry = m

Blocking or not?

    nonBlockGetR :: Resource -> Int -> STM Bool
    nonBlockGetR r i =
        (do getR r i
            return True)
      `orElse`
        return False

The choice between blocking and non-blocking is up to the caller, not the method itself.
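A small usage sketch: with only 2 units available, the non-blocking version returns False immediately instead of retrying.

    main :: IO ()
    main = do
      r  <- atomically (newTVar 2)
      ok <- atomically (nonBlockGetR r 4)
      print ok   -- False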

Example: MVars
MVars can be implemented using TVars:

    type MVar a = TVar (Maybe a)
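A fuller sketch of that idea (the primed names are mine, to avoid clashing with Control.Concurrent.MVar):

    import Control.Concurrent.STM

    type MVar' a = TVar (Maybe a)

    newEmptyMVar' :: STM (MVar' a)
    newEmptyMVar' = newTVar Nothing

    takeMVar' :: MVar' a -> STM a
    takeMVar' v = do
      m <- readTVar v
      case m of
        Nothing -> retry                  -- block until a value arrives
        Just x  -> do writeTVar v Nothing
                      return x

    putMVar' :: MVar' a -> a -> STM ()
    putMVar' v x = do
      m <- readTVar v
      case m of
        Nothing -> writeTVar v (Just x)
        Just _  -> retry                  -- block until the slot is empty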

New Things in STM Haskell
Safe transactions through type safety:
- the degenerate monad STM: we can only access TVars, and TVars can only be accessed in the STM monad
- referential transparency
Explicit retry -- expressiveness
Compositional choice (orElse) -- expressiveness

Problems in C++/Java/C#
- Retry semantics
- IO in atomic blocks
- Access to transaction variables outside of atomic blocks
- Access to regular variables inside of atomic blocks

STM Haskell
Control.Concurrent.STM

    type STM a
    instance Monad STM

    type TVar a
    newTVar   :: a -> STM (TVar a)
    readTVar  :: TVar a -> STM a
    writeTVar :: TVar a -> a -> STM ()

    atomically :: STM a -> IO a
    retry      :: STM a
    orElse     :: STM a -> STM a -> STM a