Chien-Chung Shen CIS/UD


Final Exam Review Chien-Chung Shen CIS/UD cshen@udel.edu

Zombie When a parent process does not (get the chance to) call wait() for its children, the children become zombie processes (marked <defunct> in Linux, state Z) when they exit /usa/cshen/public_html/361/Proj_3/zombie.c
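A minimal sketch of the zombie-and-reap pattern (the function name `reap_child` is mine, not from the course's zombie.c): between the child's `_exit()` and the parent's `waitpid()`, the child sits in the zombie state, visible as <defunct> in `ps`.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits immediately; until the parent calls
 * waitpid(), the child remains a zombie (state Z / <defunct>). */
int reap_child(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        _exit(7);                /* child terminates; it is now a zombie */
    sleep(1);                    /* child is a zombie during this window */
    int status;
    pid_t reaped = waitpid(pid, &status, 0);  /* parent reaps it here */
    if (reaped != pid || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);  /* child's exit status, 7 */
}
```

Running `ps` during the `sleep(1)` window would show the child as defunct; after `waitpid()` its process-table entry is released.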

Scheduling Time quantum (slice); context-switching overhead; CPU time of process; multi-level feedback queue zyBooks: Exercises 3.3.1 – 3.3.4
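The time-quantum arithmetic can be sketched as follows (the function name and the workload numbers are illustrative, not the zyBooks exercises): under round-robin, a process needing T ms of CPU with quantum Q is preempted ceil(T/Q) - 1 times, each preemption paying the context-switching overhead.

```c
/* Number of round-robin preemptions for a CPU burst of cpu_ms
 * with quantum quantum_ms, ignoring I/O and other processes. */
int preemptions(int cpu_ms, int quantum_ms) {
    int slices = (cpu_ms + quantum_ms - 1) / quantum_ms;  /* ceil(T/Q) */
    return slices - 1;   /* last slice ends by completion, not preemption */
}
```

For example, a 100 ms burst with a 30 ms quantum runs in 4 slices and is preempted 3 times; a shorter quantum increases responsiveness but multiplies the context-switch overhead.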

Coarse-grained Locking int count; int salary; // shared var's mutex_lock A; thread 1: lock(A); count+=1; salary+=50; unlock(A); thread 2: lock(A); count+=2; salary+=70; unlock(A); How to allow more threads to execute (more) different critical sections at the same time? How to increase concurrency?
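A runnable sketch of the coarse-grained version using pthreads (the slide's `mutex_lock A` becomes a `pthread_mutex_t`; the function names are mine): one lock serializes every critical section, even when two threads touch unrelated variables.

```c
#include <pthread.h>

/* Shared variables guarded by a single coarse-grained lock A:
 * every critical section, whether it touches count or salary,
 * serializes on the same mutex. */
static int count = 0;
static int salary = 0;
static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);
    count += 1;
    salary += 50;
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);
    count += 2;
    salary += 70;
    pthread_mutex_unlock(&A);
    return NULL;
}

/* Run both threads; returns 0 when the final totals are correct. */
int coarse_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (count == 3 && salary == 120) ? 0 : -1;
}
```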

Fine-grained Locking int count; int salary; // shared var's mutex_lock A, B; thread 1: lock(A); count+=1; unlock(A); lock(B); salary+=50; unlock(B); thread 2: lock(A); count+=2; unlock(A); lock(B); salary+=70; unlock(B); Allow more threads to execute (more) different critical sections at the same time
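The fine-grained version can be sketched the same way (function names are mine): lock A protects only `count` and lock B only `salary`, so a thread in the count critical section never blocks a thread in the salary critical section.

```c
#include <pthread.h>

/* Two independent locks: A protects count, B protects salary. */
static int count = 0;
static int salary = 0;
static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int n = *(int *)arg;            /* 1 for thread 1, 2 for thread 2 */
    pthread_mutex_lock(&A);
    count += n;                     /* critical section for count */
    pthread_mutex_unlock(&A);
    pthread_mutex_lock(&B);
    salary += (n == 1) ? 50 : 70;   /* critical section for salary */
    pthread_mutex_unlock(&B);
    return NULL;
}

int fine_demo(void) {
    pthread_t t1, t2;
    int one = 1, two = 2;
    pthread_create(&t1, NULL, worker, &one);
    pthread_create(&t2, NULL, worker, &two);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (count == 3 && salary == 120) ? 0 : -1;
}
```

The final totals are the same as with one big lock; what changes is that the two critical sections can now overlap in time, increasing concurrency.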

Semaphore Usage: Summary (1) Mutual exclusion: binary semaphore as mutex lock. (2) Controlled access to a given resource consisting of a finite number of instances: counting semaphore, initialized to the number of instances available. (3) Synchronization: two concurrently running threads T1 and T2 with statements S1 and S2, respectively; require that S2 execute only after S1 has completed (on one CPU): Semaphore s = 0; T1: S1; signal(s); T2: wait(s); S2;
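The ordering use (case 3) translates directly to POSIX semaphores; a sketch (function names are mine): because `s` starts at 0, T2 blocks in `sem_wait` until T1 has finished S1 and posted, regardless of which thread runs first.

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t s;
static int order[2];
static int next_slot = 0;

static void *t1_body(void *arg) {   /* S1; signal(s) */
    (void)arg;
    order[next_slot++] = 1;         /* S1 */
    sem_post(&s);
    return NULL;
}

static void *t2_body(void *arg) {   /* wait(s); S2 */
    (void)arg;
    sem_wait(&s);                   /* blocks until T1 posts */
    order[next_slot++] = 2;         /* S2 */
    return NULL;
}

int ordering_demo(void) {
    sem_init(&s, 0, 0);             /* s = 0: T2 must wait for T1 */
    pthread_t t1, t2;
    pthread_create(&t2, NULL, t2_body, NULL);  /* start T2 first on purpose */
    pthread_create(&t1, NULL, t1_body, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (order[0] == 1 && order[1] == 2) ? 0 : -1;
}
```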

Single-Slot Producer/Consumer Two threads on one CPU ➡️ either thread could run first; the buffer is initially empty. Semaphore implementation: int sem_wait(sem_t *s) { // P: s--; if (s < 0) sleep; } int sem_post(sem_t *s) { // V: s++; if threads are waiting, wake one up; } Initialize empty = 1, full = 0. What does P care about (when will P have to wait)? P waits for the buffer to become empty in order to put data into it (it does not have to wait initially). What should P do after putting data in? Notify C that the buffer is full. What does C care about (when will C have to wait)? C waits for the buffer to become full (filled) before getting data. What should C do after getting data? Notify P that the buffer is empty. P: sem_wait(&empty); put(i); sem_post(&full); C: sem_wait(&full); tmp = get(); sem_post(&empty); Rule of thumb: every time a thread has to wait for something, use a semaphore.
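A runnable sketch of the one-slot scheme with POSIX semaphores (the loop count and function names are mine; `put`/`get` are just buffer assignments): `empty` starts at 1 so P can put immediately, `full` starts at 0 so C blocks until the first put.

```c
#include <pthread.h>
#include <semaphore.h>

/* One-slot buffer: empty starts at 1 (P may put immediately),
 * full starts at 0 (C must wait for the first put). */
static sem_t empty_slot, full_slot;
static int buffer;
static int consumed_sum = 0;
enum { N = 5 };

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= N; i++) {
        sem_wait(&empty_slot);   /* wait for the slot to be empty */
        buffer = i;              /* put(i) */
        sem_post(&full_slot);    /* notify C that the slot is full */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        sem_wait(&full_slot);    /* wait for the slot to be full */
        consumed_sum += buffer;  /* tmp = get() */
        sem_post(&empty_slot);   /* notify P that the slot is empty */
    }
    return NULL;
}

int pc_demo(void) {
    sem_init(&empty_slot, 0, 1);
    sem_init(&full_slot, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;         /* 1+2+...+N = 15 */
}
```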

Efficiency and Concurrency What is the behavior of the bounded-buffer producer-consumer system on a single CPU? Essentially a stop-and-wait protocol (one put followed by one get), with many context switches between threads. How to increase its efficiency (or reduce the number of context switches)? How to increase concurrency (parallelism)? Multiple producers and multiple consumers.

Single P/C + Multiple Slots With N buffer slots, what should the initial values be? empty = N, full = 0
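With multiple slots the semaphores become counting semaphores over a ring buffer; a sketch (the slot count, item count, and function names are mine): `empty` starts at the number of free slots, `full` at 0, and the producer can now run several puts ahead of the consumer, reducing context switches.

```c
#include <pthread.h>
#include <semaphore.h>

/* N-slot ring buffer, single producer and single consumer:
 * empty starts at NSLOTS (all slots free), full starts at 0. */
enum { NSLOTS = 4, NITEMS = 10 };
static int slots[NSLOTS];
static int put_idx = 0, get_idx = 0;
static sem_t empty_sem, full_sem;
static int total = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= NITEMS; i++) {
        sem_wait(&empty_sem);             /* claim a free slot */
        slots[put_idx] = i;
        put_idx = (put_idx + 1) % NSLOTS;
        sem_post(&full_sem);              /* one more filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        sem_wait(&full_sem);              /* claim a filled slot */
        total += slots[get_idx];
        get_idx = (get_idx + 1) % NSLOTS;
        sem_post(&empty_sem);             /* one more free slot */
    }
    return NULL;
}

int bounded_demo(void) {
    sem_init(&empty_sem, 0, NSLOTS);
    sem_init(&full_sem, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return total;                         /* 1+2+...+10 = 55 */
}
```

With multiple producers or consumers, the index updates would additionally need a mutex; with a single P and single C, the two semaphores alone are sufficient.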

Rendezvous via Semaphore /usa/cshen/361/OSTEP/Chap31/HW-Threads-RealSemaphores/r.c
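The rendezvous pattern can be sketched as follows (this is my reconstruction of the standard two-semaphore rendezvous, not the contents of the referenced r.c): each thread posts "I have arrived" on its own semaphore and then waits on the other's, so neither passes the meeting point until both have reached it.

```c
#include <pthread.h>
#include <semaphore.h>

/* Rendezvous: neither thread proceeds past the meeting point
 * until the other has arrived. */
static sem_t a_arrived, b_arrived;
static int a_before = 0, b_before = 0;   /* set before the rendezvous */
static int a_saw_b = 0, b_saw_a = 0;     /* observed after it */

static void *thread_a(void *arg) {
    (void)arg;
    a_before = 1;
    sem_post(&a_arrived);    /* "A is here" */
    sem_wait(&b_arrived);    /* wait for B */
    a_saw_b = b_before;      /* guaranteed to see 1 */
    return NULL;
}

static void *thread_b(void *arg) {
    (void)arg;
    b_before = 1;
    sem_post(&b_arrived);    /* "B is here" */
    sem_wait(&a_arrived);    /* wait for A */
    b_saw_a = a_before;      /* guaranteed to see 1 */
    return NULL;
}

int rendezvous_demo(void) {
    sem_init(&a_arrived, 0, 0);
    sem_init(&b_arrived, 0, 0);
    pthread_t ta, tb;
    pthread_create(&ta, NULL, thread_a, NULL);
    pthread_create(&tb, NULL, thread_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return (a_saw_b == 1 && b_saw_a == 1) ? 0 : -1;
}
```

Note the post-then-wait order: if both threads waited first and posted second, they would deadlock.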

Hard Links $ ln Chapter3 Chapter3.hard $ ls -il (show attributes of files) two names for the same file
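The "two names, one file" claim can be checked directly (this transcript assumes GNU coreutils `stat -c`; on BSD/macOS the flag differs): after `ln`, both names report the same inode number and a link count of 2.

```shell
# Create a file, hard-link it, and show that both names
# share one inode (one underlying file).
cd "$(mktemp -d)"
echo hello > Chapter3
ln Chapter3 Chapter3.hard
ls -il                          # both entries show the same inode number
stat -c '%i %h' Chapter3        # inode and link count (link count is now 2)
stat -c '%i %h' Chapter3.hard   # same inode, same link count
```

Removing one name only decrements the link count; the file's data survives until the count reaches zero.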

Get Info about Files [cisc361:/usa/cshen/361 1077] echo hello > foo [cisc361:/usa/cshen/361 1078] more foo hello [cisc361:/usa/cshen/361 1079] stat foo   File: 'foo'   Size: 6         Blocks: 3          IO Block: 1048576 regular file Device: 29h/41d Inode: 73985       Links: 1 Access: (0664/-rw-rw-r--)  Uid: ( 4157/   cshen)   Gid: ( 4157/   cshen) Access: 2018-12-02 23:06:59.243498589 -0400 Modify: 2018-12-02 23:08:42.979511362 -0400 Change: 2018-12-02 23:08:42.979511362 -0400 Birth: - [cisc361:/usa/cshen/361 1080] ls -i foo 73985 foo [cisc361:/usa/cshen/361 1081]  All info of each file is stored in the inode (persistent) structure

Soft/Symbolic Links $ ln -s Chapter3 Chapter3.soft A soft link is itself a file containing the "pathname" of the file it links to 3 file types: regular file (-), directory (d), symbolic link (l)

Symbolic (Soft) Links A symbolic link is actually a file itself, of a different type, containing the pathname of the linked-to file d: directory -: regular file l: symbolic link Dangling references are possible (the linked-to file may be removed while the link remains)
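A short transcript illustrating both points (made-up file names; `readlink` prints the stored pathname): the link is a file of type `l`, and deleting the target leaves it dangling.

```shell
# A symbolic link is its own file (type 'l') holding a pathname.
cd "$(mktemp -d)"
echo hi > Chapter3
ln -s Chapter3 Chapter3.soft
ls -l Chapter3.soft               # first character is 'l'; shows "-> Chapter3"
readlink Chapter3.soft            # prints the stored pathname: Chapter3
rm Chapter3                       # target gone: Chapter3.soft now dangles
cat Chapter3.soft 2>/dev/null || echo "dangling link"
```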

Fundamental Issues Since we cannot count on simultaneous observations of global states in distributed systems, we need to find a property on which we can depend. Distributed systems are causal: the cause precedes the effect; the sending of a message precedes the receipt of that message.

Space-Time Diagram [figure: events on process time lines, time increasing along the horizontal axis] p1 and r4? concurrent. p3 and q3? causally related.

Lamport Timestamps Example Events occurring at three processors; local logical clocks are initialized to 0 [figure: space-time diagram with the resulting Lamport timestamps at each event]
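The Lamport clock rules behind such an example can be sketched in a few lines (the two-process scenario below is my own, not the slide's three-process figure): increment on every local event and send, and on receipt take the max of the local clock and the message timestamp, plus one. This guarantees that if event a causally precedes event b, then L(a) < L(b).

```c
/* Lamport logical clocks:
 *   local event / send:  L = L + 1  (a send stamps the message with L)
 *   receive(ts):         L = max(L, ts) + 1
 */
static int tick(int L) { return L + 1; }
static int recv_event(int L, int ts) { return (L > ts ? L : ts) + 1; }

/* Two processes P and Q, clocks starting at 0. */
int lamport_demo(void) {
    int p = 0, q = 0;
    p = tick(p);               /* P: local event,  p = 1 */
    p = tick(p);               /* P: send m,       p = 2, m stamped 2 */
    int m = p;
    q = tick(q);               /* Q: local event,  q = 1 */
    q = recv_event(q, m);      /* Q: receive m,    q = max(1, 2) + 1 = 3 */
    return q;
}
```

Note the converse does not hold: L(a) < L(b) does not imply a caused b, which is why concurrent events in a space-time diagram can still carry ordered timestamps.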