Readers and Writers
An introduction to the Linux programming interface for using UNIX semaphores

Proberen and Verhogen
- Dijkstra's invention (1965): P (proberen, "to test") and V (verhogen, "to increase")
- The variable s is initialized to a positive value
- P(s) tests s and, if positive, decrements it; otherwise the caller sleeps until s becomes positive
- V(s) increments s
- Can be used to achieve mutual exclusion, or to enforce an upper limit on the number of simultaneous accesses to a resource (see the sketch below)
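As a minimal sketch (pseudocode in the slide's own P/V notation; the semaphore s is illustrative), mutual exclusion looks like:

    s = 1;     /* initialized to 1: one task at a time may proceed */

    P(s);      /* entry: blocks while another task is inside        */
    /* ... critical section: at most one task executes here ... */
    V(s);      /* exit: wakes one sleeping task, if any             */

Initializing s to some N > 1 instead would allow up to N tasks into the guarded region simultaneously.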

Wait and Signal
Dijkstra's P(s) and V(s) operations often are called 'semWait()' and 'semSignal()':

    struct semaphore {
        int     count;    /* when negative, |count| = number of waiters */
        queue_t q;        /* queue of tasks blocked on this semaphore   */
    };

    void semWait( struct semaphore *s )
    {
        if ( --s->count < 0 ) {          /* nothing available: must wait */
            enqueue( task, s->q );
            block( task );
        }
    }

    void semSignal( struct semaphore *s )
    {
        if ( ++s->count <= 0 ) {         /* still <= 0: a task is waiting */
            dequeue( task, s->q );
            unblock( task );
        }
    }

When the count is negative, its absolute value is the number of tasks blocked on the semaphore, which is why semSignal() wakes a waiter whenever the incremented count is still at or below zero.

Readers and Writers
- A data buffer is shared among a number of separate processes
- Some processes want to read the buffer, and other processes want to write to it
- Multiple readers may read the buffer simultaneously
- Only one writer at a time can write to the buffer
- While a writer is writing to the buffer, no reader may read from it (a sketch of the classic solution follows this list)
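A sketch of the classic readers-preference solution, built on the semWait()/semSignal() operations above (the names mutex, wrt, and readcount are illustrative, not from the slides):

    struct semaphore mutex = { 1 };   /* protects readcount                     */
    struct semaphore wrt   = { 1 };   /* held by the writer, or by the readers  */
    int readcount = 0;                /* number of readers currently reading    */

    void reader( void )
    {
        semWait( &mutex );
        if ( ++readcount == 1 )
            semWait( &wrt );          /* first reader locks writers out */
        semSignal( &mutex );

        /* ... read the shared buffer ... */

        semWait( &mutex );
        if ( --readcount == 0 )
            semSignal( &wrt );        /* last reader lets writers back in */
        semSignal( &mutex );
    }

    void writer( void )
    {
        semWait( &wrt );              /* exclusive access */
        /* ... write to the shared buffer ... */
        semSignal( &wrt );
    }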

Semaphore sets
- You use the 'semget()' function to create a new semaphore set within the kernel (and acquire a 'handle' to it for later reference)
- You use the 'semctl()' function to inspect or modify the semaphore value(s) for the semaphores in your semaphore set
- Unlike shared-memory segments, which are attached and detached with 'shmat()' and 'shmdt()', a semaphore set is not mapped into your process; you simply refer to it by the identifier that 'semget()' returned
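A minimal sketch of creating and initializing a set (the key 0x1234, the permissions, and the two-semaphore size are illustrative; note that on Linux the caller must define 'union semun' itself):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* Linux requires the application to define this union for semctl() */
    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    int main( void )
    {
        /* create (or look up) a set of two semaphores */
        int semid = semget( (key_t)0x1234, 2, IPC_CREAT | 0666 );
        if ( semid < 0 ) { perror( "semget" ); exit( 1 ); }

        /* initialize both semaphore counters to 1 */
        union semun arg;
        arg.val = 1;
        semctl( semid, 0, SETVAL, arg );
        semctl( semid, 1, SETVAL, arg );

        printf( "semaphore set id = %d\n", semid );
        return 0;
    }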

UNIX kernel structures
Each semaphore in a set is represented inside the kernel by a 'struct sem':

    struct sem {
        semval      /* current value of the semaphore counter        */
        sempid      /* pid of the most recent task to access it      */
        semncnt     /* counts tasks waiting for semval to increase   */
        semzcnt     /* counts tasks waiting for semval == 0          */
    };
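Each of these fields can be queried from user space with 'semctl()'; a small sketch (assuming 'semid' came from an earlier 'semget()'):

    #include <stdio.h>
    #include <sys/sem.h>

    /* Print the kernel's per-semaphore bookkeeping for semaphore 'n' */
    void show_sem( int semid, int n )
    {
        printf( "semval=%d sempid=%d semncnt=%d semzcnt=%d\n",
                semctl( semid, n, GETVAL  ),    /* current counter value   */
                semctl( semid, n, GETPID  ),    /* pid of last accessor    */
                semctl( semid, n, GETNCNT ),    /* tasks awaiting increase */
                semctl( semid, n, GETZCNT ) );  /* tasks awaiting zero     */
    }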

A 'semaphore set' descriptor
Each set is described in the kernel by a 'struct semid_ds', which points at the array of per-semaphore counters (the slide's diagram showed a set with nsems = 3):

    struct semid_ds {
        sem_perm     /* ownership and access permissions              */
        *sem_base    /* pointer to the struct sem array[] holding one */
                     /* struct sem per semaphore in the set           */
        sem_nsems    /* number of semaphores in the set (here, 3)     */
        sem_otime    /* time of the last semop()                      */
        sem_ctime    /* time of the last change                       */
    };
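A user program can fetch a copy of this descriptor with the IPC_STAT command of 'semctl()'. A hedged sketch (it assumes 'semid' from an earlier 'semget()' and the caller-defined 'union semun' shown earlier; the user-visible copy omits kernel-internal pointers such as sem_base):

    struct semid_ds ds;
    union semun arg;

    arg.buf = &ds;                               /* kernel fills in 'ds' */
    if ( semctl( semid, 0, IPC_STAT, arg ) == 0 )
        printf( "nsems=%lu otime=%ld ctime=%ld\n",
                (unsigned long)ds.sem_nsems,
                (long)ds.sem_otime, (long)ds.sem_ctime );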

Semaphore 'actions'
- You use the 'semop()' function to request that the kernel perform one or more 'actions' on semaphores in your semaphore set
- Such actions will be performed atomically
- The types of actions include incrementing and decrementing a semaphore's counter, as well as 'waiting' until the semaphore's value is increased
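A minimal sketch of Dijkstra's P and V expressed as semop() actions (the helper names sem_P and sem_V are illustrative; 'semid' is assumed to come from an earlier 'semget()'):

    #include <sys/sem.h>

    /* P: wait until semaphore 'num' is positive, then decrement it */
    void sem_P( int semid, int num )
    {
        struct sembuf op = { num, -1, 0 };   /* sem_num, sem_op, sem_flg */
        semop( semid, &op, 1 );              /* blocks until semval >= 1 */
    }

    /* V: increment semaphore 'num', waking a waiter if one is blocked */
    void sem_V( int semid, int num )
    {
        struct sembuf op = { num, +1, 0 };
        semop( semid, &op, 1 );
    }

Passing an array of several struct sembuf entries to one semop() call makes the whole group of actions take effect atomically.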

A client-and-server demo
- Our 'lottery.cpp' program implements a 'server' which writes to shared memory
- Our 'gambler.cpp' program implements a 'client' which reads from shared memory
- A semaphore set with two semaphores is used for coordinating the reads and writes
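The slides do not show the demo's source, so the following is purely illustrative of how two semaphores can coordinate a writer and a reader (using the sem_P()/sem_V() helpers sketched above; the roles assigned to semaphores 0 and 1 are an assumption, not necessarily what lottery.cpp and gambler.cpp actually do):

    /* server (writer) side -- roles of semaphores 0 and 1 are assumed */
    sem_P( semid, 0 );    /* wait until the reader has finished       */
    /* ... write new data into the shared-memory buffer ... */
    sem_V( semid, 1 );    /* announce that fresh data is available    */

    /* client (reader) side */
    sem_P( semid, 1 );    /* wait for fresh data                      */
    /* ... read the shared-memory buffer ... */
    sem_V( semid, 0 );    /* hand the buffer back to the writer       */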

In-class exercise
Can you implement a modification of the 'gambler.cpp' client program which forks multiple separate reader processes, each using randomly generated 'bets', to try to 'win' the lottery (i.e., to guess the winning number combination)? How many times would you need to 'fork()' in order to have a better-than-even chance of winning?
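As a hint for the second question (assuming each forked gambler bets independently and uniformly at random over N equally likely combinations), the chance that at least one of k gamblers wins is

    1 - (1 - 1/N)^k

Requiring this to exceed 1/2 gives k > ln 2 / -ln(1 - 1/N), which for large N is approximately N ln 2, i.e. about 0.693 N forked gamblers. (If the children could somehow coordinate to bet on distinct combinations instead, just over N/2 forks would suffice.)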