Communication and Synchronization of concurrent tasks


Unit VI: Communication and Synchronization of Concurrent Tasks

Communication: the passing of information from one task to another. Synchronisation: the satisfaction of constraints on the interleaving of the actions of tasks.

Communication requires synchronisation; synchronisation can be considered as contentless communication. Communication is also referred to as cooperation.

Dependency Relationships A → B, A ← B, A ↔ B, A NULL B

The posix_queue object is used to communicate between processes. It is an interface to a POSIX message queue, a linked list of strings; here it contains the names of the files that the worker processes are to search to find the code.

Example of unidirectional dependency Threads can also communicate with other threads within the address space of their process by using global variables and data structures. If two threads wanted to pass data between them, thread A would write the name of the file to a global variable, and thread B would simply read that variable.

Example of bidirectional dependency Two First - In, First - Out (FIFO) pipes. A pipe is a data structure that forms a communication channel between two processes.

Cooperation dependency Task A requires a resource that Task B owns and Task B must release the resource before Task A can use it Example: write access to the posix_queue

Counting Task Dependencies Consider three threads A, B, and C. Let n be the number of threads and k the number of threads involved in a dependency. Possible thread combinations involved in a dependency = C(n, k) = n! / (k! (n − k)!). For n = 3 and k = 2, this gives 3 two-thread combinations.

Each combination can be considered as a simple graph. An adjacency matrix is used to represent dependency relationships for two-thread combinations. It represents a graph G = (V, E), in which V is the set of vertices (nodes) of the graph and E is the set of edges, such that: A(i, j) = 1 if (i, j) is an element of E, and 0 otherwise. Because the graph is directed, A(i, j) need not equal A(j, i).

Consider tasks A, B, and C

C – Communication; Co – Cooperation

Unified Modeling Language (UML) dependency

Interprocess Communication A process sends data to another process or makes another process aware of an event by means of operating system APIs.

Persistence of IPC The persistence of an object refers to the existence of an object during or beyond the execution of the program, process, or thread that created it.

A storage class specifies how long an object exists during the execution of a program: automatic, static, or dynamic.

IPC entities reside in the filesystem, in kernel space, or in user space; persistence is concerned with how long the object exists.

An IPC object with filesystem persistence exists until the object is deleted explicitly. If the kernel is rebooted, the object will keep its value. Kernel persistence defines IPC objects that remain in existence until the kernel is rebooted or the object is deleted explicitly. An IPC object with process persistence exists until the process that created the object closes it.

Environment Variables & Command - Line Arguments Environment variables store system - dependent information such as paths to directories that contain commands, libraries, functions, and procedures used by a process.

int posix_spawn(pid_t *restrict pid, const char *restrict path, const posix_spawn_file_actions_t *file_actions, const posix_spawnattr_t *restrict attrp, char *const argv[restrict], char *const envp[restrict]);

Files are the simplest and most flexible means of transferring or sharing data.

Steps in the file-transferring process: The name of the file has to be communicated. You must verify the existence of the file. Be sure that the correct permissions are granted to access the file. Open the file. Synchronize access to the file. While reading/writing to the file, check to see if the stream is good and that it's not at the end of the file. Close the file.

Shared Memory Using POSIX Shared Memory Shared memory maps a file or internal memory into the shared memory region.

#include <sys/mman.h>
void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
int munmap(void *addr, size_t len);

fd = open(file_name, O_RDWR);
ptr = static_cast<type *>(mmap(NULL, sizeof(type), PROT_READ, MAP_SHARED, fd, 0));

#include <sys/mman.h>
int shm_open(const char *name, int oflag, mode_t mode);
int shm_unlink(const char *name);
oflag is a bit mask created by ORing together one of these flags: O_RDONLY or O_RDWR

fd = shm_open(memory_name, O_RDWR, MODE);
ptr = static_cast<type *>(mmap(NULL, sizeof(type), PROT_READ, MAP_SHARED, fd, 0));
Use semaphores between processes:
sem_wait(sem);
... *ptr ...;
sem_post(sem);

Pipes are communication channels used to transfer data between processes. They may be anonymous or named (also called FIFOs).

Named Pipes (FIFO) are created with mkfifo():
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char *pathname, mode_t mode);
int unlink(const char *pathname);

Program that creates a named pipe
using namespace std;
#include <iostream>
#include <fstream>
#include <sys/wait.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[], char *envp[])
{
   fstream pipe;
   if (mkfifo("Channel-one", S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP) == -1) {
      cerr << "could not make fifo" << endl;
   }
   pipe.open("Channel-one", ios::out);
   if (pipe.bad()) {
      cerr << "could not open fifo" << endl;
   }
   else {
      pipe << "2 3 4 5 6 7" << endl;
   }
   return (0);
}

The reader side opens the same FIFO for input and reads it until end-of-file:

string Input;
while (!pipe.eof() && pipe.good()) {
   getline(pipe, Input);
   cout << Input << endl;
}
pipe.close();
unlink("Channel-one");

FIFO basic components Input/output port Insertion and extraction operation Creation/initialization operation Buffer creation, insertion, extraction, destruction

Message Queue A message queue is a linked list of strings or messages. Each message in the queue has these attributes: A priority The length of the message The message or data

#include <mqueue.h>
mqd_t mq_open(const char *name, int oflag, mode_t mode, struct mq_attr *attr);
int mq_close(mqd_t mqdes);
int mq_unlink(const char *name);

Interthread Communications Communication between threads is used to: Share data Send a message

Types of ITC Global data, variables, and data structures Declared outside of the main function or with global scope. Any modifications to the data are instantly visible to all peer threads.

Parameters Parameters passed to threads during creation. The generic pointer can be converted to any data type.

File handles Files shared between threads. These threads share the same read-write pointer and offset of the file.

Synchronizing Concurrency sharable software resources are: Applications Programs Utilities

Types of Synchronization Data Necessary to prevent race conditions. It allows concurrent threads/processes to access a block of memory safely. Hardware Necessary when several hardware devices are needed to perform a task or group of tasks. It requires communication between tasks and tight control over real-time performance and priority settings. Task Necessary to prevent race conditions. It enforces preconditions and postconditions of logical processes.

Critical sections A critical section is an area or block of code that accesses a shared resource. It must be controlled because the resource is shared by multiple concurrent tasks.

Conditions while sharing a resource If a task is in its critical section, other tasks sharing the resource cannot be executing in their critical sections; they are blocked. This is called mutual exclusion. If no tasks are in their critical sections, then any blocked tasks can now enter their critical sections. This is called progress. There should be a bounded wait on the number of times a task is allowed to reenter its critical section; a task that keeps reentering its critical section may prevent other tasks from entering theirs. A task cannot reenter its critical section if other tasks are waiting in a queue.

PRAM Model The Parallel Random-Access Machine model, in which there are N processors labeled P1, P2, P3, . . . , PN that share one global memory.

To access the shared global memory: concurrent read and write algorithms, and exclusive read and write algorithms.

Concurrent and Exclusive Memory Access Exclusive Read and Exclusive Write (EREW) Concurrent Read and Exclusive Write (CREW) Exclusive Read and Concurrent Write (ERCW) Concurrent Read and Concurrent Write (CRCW)

Relationships between Cooperating Tasks Start-to-start (SS): Task B cannot start until Task A starts. Finish-to-start (FS): Task A cannot finish until Task B starts. Start-to-finish (SF): Task A cannot start until Task B finishes. Finish-to-finish (FF): Task A cannot finish until Task B finishes.

Synchronization Mechanisms Semaphores and mutexes Read - write locks Condition variables

Semaphore A synchronization mechanism that is used to manage synchronization relationships and implement access policies. A special kind of variable that can be accessed only by very specific operations.

Basic Semaphore Operations
P() operation --- decrements the semaphore:
P(Mutex)
   if (Mutex > 0) {
      Mutex--;
   }
   else {
      block on Mutex;
   }
V() operation --- increments the semaphore:
V(Mutex)
   if (processes are blocked on Mutex) {
      pass Mutex to one of them;
   }
   else {
      Mutex++;
   }
Other names for these operations: lock() / wait() / own() and unlock() / post() / unown().

Types of semaphores A binary semaphore has the value 0 or 1. The semaphore is available when its value is 1 and not available when it is 0. A counting semaphore has some non - negative integer value. Its initial value represents the number of resources available.

POSIX Semaphores POSIX defines a named semaphore; the name corresponds to a pathname in the filesystem.

Basic Semaphore operations Initialization – Allocates the memory required to hold the semaphore and gives the memory initial values. Determines whether the semaphore is private, sharable, owned, or unowned.

Request ownership – Makes a request to own the semaphore. If the semaphore is owned by another thread, then the requesting thread blocks.

Release ownership --- Releases the semaphore so it is accessible to blocked threads.

Try ownership --- Tests the ownership of the semaphore. If the semaphore is owned, the requester does not block but continues executing; a timed variant can wait for a period of time before continuing.

Figure: a process using a semaphore on an output file.

Mutex Semaphores Mutex means mutual exclusion. A mutex is a type of semaphore (pthread_mutex_t). It must always be unlocked by the thread that locked it. With a semaphore, a post (or unlock) can be performed by a thread other than the thread that performed the wait (or lock).

Condition Variables A mutex allows tasks to synchronize by controlling access to the shared data. A condition variable allows tasks to synchronize on the value of the data. --- pthread_cond_t Condition variables are semaphores that signal when an event has occurred.

Types of operations of conditional variables Initialize Destroy Wait Timed wait Signal Broadcast

Thread Strategy Approaches The approach determines how the threaded application delegates its work to the tasks and how communication is performed. A strategy supplies a structure and approach to threading and helps in determining the access policies.

Threads are given work according to a specific strategy or approach. If the application models some procedure or entity, then the approach selected should reflect that model.

The common models Delegation (boss - worker) Peer - to - peer Pipeline Producer - consumer

Delegation Model
Boss thread:
Create all the threads
Place work in the queue
Awaken worker threads when work is available
Worker threads:
Check the request in the queue
Perform the assigned task
Suspend itself if no work is available

Peer-to-Peer Model All the threads have an equal working status. There is a single thread that initially creates all the threads needed to perform all the tasks, but that thread is still considered a worker thread.

Producer - Consumer Model There is a producer thread that produces data to be consumed by the consumer thread . The data is stored in a block of memory shared between the producer and consumer threads.

Pipeline Model It is an assembly-line approach in which a stream of items is processed in stages. At each stage, work is performed on a unit of input by a thread. Once a stage has processed its data, it is ready to process the next data in the stream. Each thread is responsible for producing its interim results or output and making them available to the next stage in the pipeline.

SPMD and MPMD for Threads

SISD

STMD

MTSD

MTMD

Decomposition and Encapsulation of Work

Example: Consider a multitude of text files that require filtering. The text files have to be filtered in order to be used in our Natural Language Processing (NLP) system. We want to remove a specified group of tokens or characters from multiple text files, characters such as [, . ? ! ], and we want this done in real time.

The objects that can be immediately identified are: Text files The characters to be removed The resulting filtered files

Approach 1 Search the file for a character. When it is found, remove it, and then search for the next occurrence of the character. When all of those characters have been removed, search the file again for the next unwanted character. Repeat this for each file. The postcondition is met because we are working on the original file and removing the unwanted characters from it.

Approach 2 Remove all occurrences of a single character from each file. Repeat this process for each unwanted character. The postcondition is met in the same way as in Approach 1.

Approach 3 Read in a single line of text, remove an unwanted character. Go through the same line of text and remove the next unwanted character, and so on. When all characters have been removed from the line of text, write the filtered line of text to the new file. This is done for each file. The postcondition is met because we are restructuring a new file as we go. As a line is processed, it is written to the new file.

Approach 4 Same as Approach 3, but we remove only a single unwanted character from a line of text and then write it to a file or container. Once the whole file has been processed, it is reprocessed for the next character. When the last character has been removed, the file has been filtered. If the text is in a container, it can now be written to a file. This is repeated for each file. The container becomes important in restructuring the file.