Parallelism and Concurrency
- Two different terms for potential vs. actual parallelism
- Actual parallelism (called parallelism) is when code can execute on physically parallel hardware
  - E.g., on multiple hosts, or on multiple cores of the same host
  - Can achieve significant speedup in program execution times
  - Must communicate to aggregate results, which slows things down
- Logical parallelism (called concurrency) is when code appears parallel but may be interleaving on the same core
  - E.g., part of one code sequence runs, then part of another, etc.
- Parallelism and concurrency share some key issues
  - E.g., asynchrony and interleaving of what happens when
  - May need to represent sequence and/or timing semantics
  - May need special handling to avoid semantic hazards
Concurrency and Synchronization Issues
- Two concurrent or parallel activities may be "racing" to reach a code section involving a shared resource
  - E.g., thread 1 does x=A; then y=B; and thread 2 does x=C; then y=D; but the statements interleave to produce x==C and y==B (which may be an invalid state)
- Bad interleavings can be avoided via synchronization
  - E.g., each thread waits for a lock on the critical section before writing, so either x==A && y==B or x==C && y==D
- However, synchronization can lead to deadlock
  - E.g., thread 1 takes lock 1 and needs lock 2, while thread 2 takes lock 2 and needs lock 1 (called a deadly embrace)
- Protocols must be followed to avoid or break deadlock
  - E.g., each thread acquires lock 1 before attempting lock 2
Processes vs. Threads
- With modern hardware, protecting the memory etc. used by a program is valuable for reliability
  - E.g., each program runs in a process with its own memory, and any attempt to access another's memory crashes it
  - Inter-program parallelism is thus made more reliable
- May want intra-program parallelism/concurrency too
  - E.g., to divide and conquer a matrix multiplication, etc.
  - Offers potential performance gains if costs can be kept low
- Threads offer much lower context-switching costs
  - Don't have to save and restore memory areas, just program counter and stack (may optimize stack in some languages)
  - Can achieve true parallelism with threads, e.g., by binding them to different cores either statically or dynamically
Today's Studio Exercises
- We'll code up ideas from Scott Chapter 12.1 and 12.2
  - Using threads to run different code sequences concurrently
  - Looking at safe vs. unsafe cases involving multithreading
- Today's exercises are in C++
  - Please take advantage of the on-line tutorial and reference manual pages that are linked on the course web site
  - The provided Makefile also may be helpful for enrichment
- As always, please ask for help as needed
  - When done, send e-mail with your answers to cse425@seas.wustl.edu, with subject line "Concurrency Studio I"