Shared Memory Programming

Michael Cormier

Memory in Message-Passing Computers
What we've been using
[Diagram: five processors P1-P5, each paired with its own local memory M1-M5, connected as a cluster]

Memory in Shared Memory Systems
The actual topic of the lecture
[Diagram: processors P1-P5 all connected to a single shared memory]

Systems With Shared Memory
Where is this stuff, anyway?
Single system
--Special-purpose systems
--Off-the-shelf hardware (Core 2 Quad, etc.)
--This lecture assumes this type
Distributed
--Makes a cluster look like a single machine, as above
--Can be implemented in hardware or software
--Tune in next class for more on these

Advantages of Shared Memory Systems
Why do we care?
--Eliminates data transfer costs (except in distributed shared memory systems)
--Common in desktops
--Can use existing thread/process methods to write programs

Disadvantages of Shared Memory Systems
What's the catch?
--Can be difficult to maintain memory consistency
--Large numbers of processors lead to non-uniform memory access times
--Can't just add another node as in a cluster

Programming in Shared Memory Systems
These aren't as funny to you as they are to me, are they?
--Two key requirements
  --Consistent
  --Deadlock-free
--Many ways of implementing parallelism
  --Processes
  --Threads
  --Specialized languages
  --Many versions of each of the above

Consistency and Deadlocking
Keeping things straight
--Consistency
  --Processes accessing shared variables can store incorrect values:

      Process A    Process B
      load x       load x
      x = 4        x = 5
      store x      store x

  --What is x?
  --Must avoid this situation (race condition)
--Deadlock
  --Process A is waiting for process B, but process B is waiting for process A
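
As a concrete illustration, here is a minimal sketch of this lost-update race using PThreads (the function name racer and the iteration count are arbitrary choices, not from the slides). Compile with cc -pthread; the printed total is usually less than 200000 because the two threads' load/add/store sequences interleave and overwrite each other:

    #include <pthread.h>
    #include <stdio.h>

    int x = 0;                          /* shared, unprotected */

    void *racer(void *arg) {
        for (int i = 0; i < 100000; i++)
            x = x + 1;                  /* load x; add 1; store x -- not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, racer, NULL);
        pthread_create(&b, NULL, racer, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("x = %d\n", x);          /* usually < 200000: updates are lost */
        return 0;
    }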

Locks
...And throw away the key!
--Method of ensuring consistency
--Spin locks: processes loop ("busy-wait") until they can enter the critical section
  --Effective, but inefficient
--Descheduling: a blocked process is halted until it can resume
  --Eliminates busy waiting
  --Complex, and incurs context-switch overhead
--Must decide which properties are most important
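
A minimal sketch of a spin lock built on C11's atomic test-and-set flag (the names spin_lock and spin_unlock are mine, not a standard API):

    #include <stdatomic.h>

    /* Spin until the flag was previously clear; setting it claims the lock. */
    void spin_lock(atomic_flag *lock) {
        while (atomic_flag_test_and_set(lock))
            ;                           /* busy-wait: burns CPU until free */
    }

    void spin_unlock(atomic_flag *lock) {
        atomic_flag_clear(lock);
    }

    /* Usage:
       static atomic_flag lock = ATOMIC_FLAG_INIT;
       spin_lock(&lock);
       ... critical section ...
       spin_unlock(&lock);                                              */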

Semaphores
SOS! SOS!
--Another method for synchronization
--Integer variable that is tested, waited on, and incremented atomically
  --P(s): wait until s > 0, then decrement s
  --V(s): increment s
--Can be used for mutual exclusion
--Can be used to control access to buffers, etc.
--Can be used to control order of execution
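
A sketch using POSIX semaphores, where sem_wait plays the role of P and sem_post the role of V. Initializing the semaphore to 1 gives mutual exclusion, turning the earlier racy counter into a correct one:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t s;                            /* binary semaphore: mutual exclusion */
    int counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&s);               /* P(s): blocks while s == 0 */
            counter++;                  /* critical section */
            sem_post(&s);               /* V(s) */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&s, 0, 1);             /* initial value 1 */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);  /* always 200000 */
        sem_destroy(&s);
        return 0;
    }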

Monitors
Not the lizards, the other kind
--Semaphores are versatile but prone to error in practice
--Monitors are intended to make it easier to synchronize processes
--Only one process at a time can be executing any method the monitor applies to
--Can be inefficient: suppose F1 accesses x and y, F2 accesses x, and F3 accesses y
  --All three fall under the same monitor because F1's accesses must be protected
  --F2 and F3 do not conflict, but they cannot run simultaneously either
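
C has no monitor construct, but the idea can be approximated by guarding every entry point with a single mutex. This sketch models the slide's F1/F2/F3 scenario (the variable and function names just mirror the slide); the last comment shows where the inefficiency comes from:

    #include <pthread.h>

    /* One mutex guards every "monitor" entry point. */
    static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;
    static int x, y;

    void f1(void) {                     /* touches x and y: needs protection */
        pthread_mutex_lock(&monitor);
        x++; y++;
        pthread_mutex_unlock(&monitor);
    }

    void f2(void) {                     /* touches only x */
        pthread_mutex_lock(&monitor);
        x++;
        pthread_mutex_unlock(&monitor);
    }

    void f3(void) {                     /* touches only y, yet can never run
                                           at the same time as f2, because
                                           both take the same monitor lock */
        pthread_mutex_lock(&monitor);
        y++;
        pthread_mutex_unlock(&monitor);
    }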

Bernstein's Conditions
They may be boring, but they're important
--Used to determine whether two processes can safely be run in parallel
--Two processes Pi and Pj can be run in parallel if and only if:
  --None of the inputs to Pi are outputs of Pj
  --None of the inputs to Pj are outputs of Pi
  --None of the outputs of Pi are outputs of Pj (and vice versa)
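
A hypothetical fragment with the input and output sets written out, checking the conditions statement by statement:

    int main(void) {
        int x = 1, y = 2, z = 3, a, b;

        a = x + y;  /* Pi: inputs {x, y}, output {a} */
        b = x * z;  /* Pj: inputs {x, z}, output {b} */
        /* No input of either statement is an output of the other, and
           the outputs {a} and {b} are disjoint: Pi and Pj satisfy
           Bernstein's conditions and may run in parallel.           */

        x = a + 1;  /* Pk: input {a}, output {x} */
        /* x is an input of Pi and an output of Pk, so Pi and Pk
           violate the conditions and must run sequentially.         */
        return 0;
    }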

Deadlock
Just about as bad as it sounds
--Occurs when two or more processes are waiting for each other to finish before continuing
--Program ceases to function
--Must design programs carefully to avoid deadlock
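
The classic recipe, sketched with two PThreads mutexes acquired in opposite orders (the names are illustrative). Run it and both threads block forever; acquiring locks in a single fixed global order would prevent this:

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *proc_1(void *arg) {
        pthread_mutex_lock(&lock_a);
        sleep(1);                       /* make the bad interleaving likely */
        pthread_mutex_lock(&lock_b);    /* waits for proc_2 ... forever */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    void *proc_2(void *arg) {
        pthread_mutex_lock(&lock_b);
        sleep(1);
        pthread_mutex_lock(&lock_a);    /* ... while proc_1 waits for us */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, proc_1, NULL);
        pthread_create(&t2, NULL, proc_2, NULL);
        pthread_join(t1, NULL);         /* never returns: deadlock */
        pthread_join(t2, NULL);
        return 0;
    }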

UNIX Processes
Luke! Use the fork()!
--Operating system feature
--fork() makes a copy of the parent process's memory at creation
--Shared memory must be explicitly created
--Much slower to create a new process than a new thread
--Can be very useful, if the OS supports them
--Most other operating systems have processes of some sort
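
A sketch of explicitly created shared memory surviving a fork(), using an anonymous shared mapping (MAP_ANONYMOUS is widely supported on Linux, the BSDs, and macOS, though not strictly POSIX):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Explicitly create a region shared across the fork(). */
        int *shared = mmap(NULL, sizeof(int),
                           PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;

        pid_t pid = fork();             /* child gets a copy of the parent's memory */
        if (pid == 0) {                 /* child */
            *shared = 42;               /* visible to the parent: the mapping is shared */
            _exit(0);
        }
        wait(NULL);                     /* ordinary (copied) variables would NOT
                                           reflect the child's writes */
        printf("shared = %d\n", *shared);   /* prints 42 */
        munmap(shared, sizeof(int));
        return 0;
    }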

Threads
Nice threads, man!
--Threads can be created much faster than processes
--PThreads
  --C library for thread support
  --Standardized in POSIX-compliant OSes
--Java threads
  --Created from Runnable objects
  --Portable
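
A minimal PThreads sketch (the thread count and function name are arbitrary); compile with cc -pthread:

    #include <pthread.h>
    #include <stdio.h>

    void *hello(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t threads[4];
        for (long i = 0; i < 4; i++)            /* spawn the workers */
            pthread_create(&threads[i], NULL, hello, (void *)i);
        for (int i = 0; i < 4; i++)             /* wait for them all */
            pthread_join(threads[i], NULL);
        return 0;
    }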

Parallel Languages
What would perpendicular languages be like?
--Constructs:
  --Declaring a variable to be shared
  --Executing a list of statements in parallel
  --forall: executing the iterations of a for loop in parallel
--Ada
  --Developed for the US Dept. of Defense
  --Not widely adopted outside military applications
--OpenMP
  --Uses compiler directives on top of an existing sequential programming language
  --Thread-based, but uses a fork/join model reminiscent of processes
  --Includes easy-to-use synchronization constructs
  --Many other constructs are implemented
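
A minimal OpenMP sketch of the forall idea in C; build with an OpenMP flag such as GCC's -fopenmp. Without the flag the pragma is simply ignored and the loop runs sequentially, which is part of OpenMP's appeal:

    #include <stdio.h>

    int main(void) {
        int a[1000];

        /* forall: the runtime divides the iterations among threads */
        #pragma omp parallel for
        for (int i = 0; i < 1000; i++)
            a[i] = i * i;               /* iterations are independent, so this is safe */

        printf("a[999] = %d\n", a[999]);
        return 0;
    }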