CS444/CS544 Operating Systems Introduction to Synchronization 2/07/2007 Prof. Searleman

CS444/CS544 Spring 2007
CPU Scheduling; Synchronization: need for synchronization primitives, locks and building locks from HW primitives
Reading assignment: Chapter 6
HW#4 posted, due:
Exam#1: Thurs. Feb. 15th, 7:00 pm, Snell 213

Multi-level Feedback Queues (MLFQ)
Multiple queues representing different types of jobs (example: I/O bound vs. CPU bound)
Queues have different priorities
Jobs can move between queues based on their execution history
If any job is guaranteed to eventually reach the top-priority queue given enough waiting time, then MLFQ is starvation free
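The selection and movement rules above can be sketched in a few lines of Python (a hypothetical three-level scheduler; the job names, queue contents, and the one-level demotion rule are illustrative assumptions, not from the slides):

```python
from collections import deque

def mlfq_pick(queues):
    """Pick the next job from the highest-priority non-empty queue.
    queues[0] is the highest priority; each queue is FIFO within its level."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # no job is ready

def demote(queues, job, level):
    """A job that used its full quantum moves down one level (CPU-bound
    behavior); jobs already at the bottom stay there."""
    queues[min(level + 1, len(queues) - 1)].append(job)

# Three priority levels; 'A' and 'B' start at the top level.
queues = [deque(['A', 'B']), deque(), deque()]
mlfq_pick(queues)   # 'A' runs first: it is at the front of the top queue
```

A job that blocks for I/O before its quantum expires would stay at (or be promoted back toward) the top level, which is what keeps interactive jobs responsive.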

Multi-level Queues (MLQ) (figure slide from Silberschatz, Galvin and Gagne ©2005, Operating System Concepts)

Multi-level Feedback Queues (MLFQ) (figure slide from Silberschatz, Galvin and Gagne ©2005, Operating System Concepts)

Real-Time Scheduling
Real-time processes have timing constraints, expressed as deadlines or rate requirements
Common real-time scheduling algorithms:
Rate Monotonic: priority is proportional to the required rate (1/period), so tasks that need to be scheduled more often have the highest priority
Earliest Deadline First: schedule the job with the earliest deadline (scheduling homework?)
To provide service guarantees, neither algorithm alone is sufficient: we also need admission control, so that the system can refuse to accept a job if it cannot honor the job's constraints
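Both policies reduce to a selection rule, sketched here in Python (the job fields and numbers are made-up examples; a real RM or EDF scheduler must also handle periodic release, preemption, and admission control):

```python
def edf_pick(jobs, now):
    """Earliest Deadline First: among jobs that have arrived and still
    need CPU time, run the one whose absolute deadline is soonest."""
    ready = [j for j in jobs if j['arrival'] <= now and j['remaining'] > 0]
    return min(ready, key=lambda j: j['deadline']) if ready else None

def rm_priority(period):
    """Rate Monotonic: priority is proportional to the rate (1/period),
    so a task with a shorter period gets a higher priority."""
    return 1.0 / period

jobs = [
    {'name': 'J1', 'arrival': 0, 'remaining': 3, 'deadline': 10},
    {'name': 'J2', 'arrival': 0, 'remaining': 1, 'deadline': 4},
]
edf_pick(jobs, now=0)   # J2: its deadline (4) is earlier than J1's (10)
```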

Multiprocessor Scheduling
Can schedule each processor separately or all together (one line feeding multiple tellers, or one line per teller)
Some issues:
Want to schedule the same process on the same processor again (processor affinity). Why? Its data may still be in that processor's caches
Want to schedule cooperating processes/threads together (gang scheduling). Why? So they don't block when they need to communicate with each other
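Processor affinity can be sketched as a preference in the pick function (a toy illustration; the `last_cpu` field and the linear scan are assumptions for the sketch — real kernels typically keep a separate run queue per CPU instead):

```python
def pick_with_affinity(ready, cpu):
    """Prefer a process that last ran on this CPU (its cached state is
    still warm there); otherwise take whatever is runnable."""
    for p in ready:
        if p['last_cpu'] == cpu:
            return p
    return ready[0] if ready else None

ready = [{'name': 'A', 'last_cpu': 1}, {'name': 'B', 'last_cpu': 0}]
pick_with_affinity(ready, 0)   # B: it last ran on CPU 0
```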

Algorithm Evaluation: Deterministic Modeling
Specify the algorithm *and* the workload
Example: Process 1 arrives at time 1, has a running time of 10 and a priority of 2; Process 2 arrives at time 5, has a running time of 2 and a priority of 1; …
What is the average waiting time if we use preemptive priority scheduling with FIFO among processes of the same priority?
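For the two processes actually given (the slide's "…" elides any further ones), a small simulator answers the question. Two assumptions are made here that the slide leaves implicit: a lower priority number means higher priority, and ties go FIFO:

```python
def avg_wait_preemptive_priority(procs):
    """Simulate preemptive priority scheduling one time unit at a time.
    procs: list of (name, arrival, burst, priority); lower number =
    higher priority (assumption). Returns the average waiting time."""
    remaining = {p[0]: p[2] for p in procs}
    finish = {}
    t = 0
    while remaining:
        ready = [p for p in procs if p[0] in remaining and p[1] <= t]
        if not ready:
            t += 1              # CPU idles until the next arrival
            continue
        # min() is stable, so earlier-listed (FIFO) processes win ties
        run = min(ready, key=lambda p: p[3])
        remaining[run[0]] -= 1
        t += 1
        if remaining[run[0]] == 0:
            finish[run[0]] = t
            del remaining[run[0]]
    # waiting time = turnaround - burst = (finish - arrival) - burst
    waits = [finish[n] - arr - burst for (n, arr, burst, _) in procs]
    return sum(waits) / len(waits)

procs = [('P1', 1, 10, 2), ('P2', 5, 2, 1)]
avg_wait_preemptive_priority(procs)   # 1.0: P1 waits 2 (while P2 runs), P2 waits 0
```

P1 runs from t=1 to t=5, P2 preempts and runs t=5 to t=7, and P1 finishes at t=13, which is exactly the kind of hand trace deterministic modeling asks for.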

Algorithm Evaluation: Queueing Models
The distributions of CPU and I/O bursts, arrival times, and service times are all modeled as probability distributions, and the resulting system is analyzed mathematically
To make the analysis tractable, the distributions are modeled as well-behaved but often unrealistic
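As a concrete instance, the classic M/M/1 model (Poisson arrivals, exponentially distributed service times, a single server) is one of those "well-behaved" cases with known closed forms:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form results for an M/M/1 queue.
    Requires arrival_rate < service_rate, or the queue grows without bound."""
    rho = arrival_rate / service_rate       # utilization
    L = rho / (1 - rho)                     # mean number of jobs in the system
    W = 1 / (service_rate - arrival_rate)   # mean time a job spends in the system
    return rho, L, W

# 8 jobs/sec arriving at a CPU that serves 10 jobs/sec:
mm1_metrics(8, 10)   # utilization 0.8, 4 jobs in the system, 0.5 sec each
```

Little's law (L = lambda * W) ties the two outputs together and holds far beyond M/M/1, which is why queueing analysis is useful even when the distributions are unrealistic.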

Algorithm Evaluation: Simulation
Implement a scheduler as a user process and drive it with a workload that is either randomly generated according to some distribution, or measured on a real system and replayed
Simulations can be just as complex as actual implementations; at some level of effort, you should just implement in the real system and test with "real" workloads
What is your benchmark / common case?

Synchronization

Concurrency is a good thing
So far we have mostly been talking about constructs to enable concurrency: multiple processes, inter-process communication, multiple threads in a process
Concurrency is critical to using the hardware devices to full capacity: there is always something that needs to be running on the CPU, using each device, etc.
We don't want to restrict concurrency unless we absolutely have to

Restricting Concurrency
When might we *have* to restrict concurrency?
Some resource is so heavily utilized that no one gets any benefit from their small piece: too many processes wanting to use the CPU (while (1) fork;) leads to "thrashing". Solution: access control (but beware starvation!)
Two processes/threads we would like to execute concurrently are going to access the same data: one writing the data while the other is reading, or two writing over top of each other at the same time. Solution: synchronization (but beware deadlock!)
Synchronization primitives enable SAFE concurrency

Correctness
Two concurrent processes/threads must execute correctly with *any* interleaving of their instructions, because scheduling is not under the control of the application writer
Note: an instruction != a line of code in a high-level programming language
If two processes/threads operate on completely independent data, there is no problem; likewise if the shared data/resources are read-only
If they share data that is written, the application programmer may need to introduce synchronization primitives to safely coordinate access to the shared data/resources

Illustrating the Problem
Suppose we have multiple processes/threads sharing a database of bank account balances. Consider the deposit and withdraw functions:

    int withdraw(int account, int amount) {
        balance = readBalance(account);
        balance = balance - amount;
        updateBalance(account, balance);
        return balance;
    }

    int deposit(int account, int amount) {
        balance = readBalance(account);
        balance = balance + amount;
        updateBalance(account, balance);
        return balance;
    }

What happens if multiple threads execute these functions for the same account at the "same" time? Notice this is not read-only access.

Example
The balance starts at $500 and then two processes withdraw $100 at the same time (two people at different ATMs; the update runs on the same back-end computer at the bank). What could go wrong? Each process runs:

    int withdraw(int acct, int amount) {
        balance = readBalance(acct);
        balance = balance - amount;
        updateBalance(acct, balance);
        return balance;
    }

Different interleavings => different final balances!!!

$500 - $100 - $100 = $400?!
Two example interleavings. If the two withdrawals run one after the other, each readBalance/updateBalance pair completes before the next begins, and the balance correctly goes $500 -> $400 -> $300. But if the second does readBalance before the first does updateBalance, both read $500, both compute $400, and both write $400:

    Thread 1                             Thread 2
    balance = readBalance(account);      // reads 500
                                         balance = readBalance(account);  // reads 500 (too early!)
    balance = balance - amount;          // 400
    updateBalance(account, balance);     // writes 400
                                         balance = balance - amount;      // 400
                                         updateBalance(account, balance); // overwrites with 400

Before you get too happy, deposits can be lost just as easily!
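The lost update can be replayed deterministically in Python (the names read_balance/update_balance mirror the slide's readBalance/updateBalance; an in-memory dict stands in for the bank's database):

```python
balance = {'acct': 500}

def read_balance(acct):
    return balance[acct]

def update_balance(acct, val):
    balance[acct] = val

# Replay the bad interleaving by hand: both withdrawals read
# before either writes, so one of the two updates is lost.
b1 = read_balance('acct')          # thread 1 reads 500
b2 = read_balance('acct')          # thread 2 reads 500 (too early!)
update_balance('acct', b1 - 100)   # thread 1 writes 400
update_balance('acct', b2 - 100)   # thread 2 overwrites with 400
read_balance('acct')               # 400, not the correct 300
```

Replaying a fixed interleaving is exactly how these bugs are reasoned about: a real scheduler merely makes this ordering possible, not certain, which is what makes race conditions so hard to reproduce.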

Race Condition
When the correct output depends on the scheduling or relative timing of operations, that is called a race condition; the output is non-deterministic
To prevent this we need mechanisms for controlling access to shared resources, enforcing determinism

Synchronization Required
Synchronization is required for all shared data structures, such as:
Shared databases (like the one of account balances)
Global variables
Dynamically allocated structures (off the heap): queues, lists, trees, etc.
OS data structures: the run queue, the process table, …
What are not shared data structures? Variables that are local to a procedure (on the stack)
(Other bad things happen if you try to share a pointer to a variable that is local to a procedure)

Critical Section Problem
Model processes/threads as alternating between code that accesses shared data (the critical section) and code that does not (the remainder section):

    do {
        ENTRY_SECTION
            critical section    /* access shared data */
        EXIT_SECTION
            remainder section   /* safe */
    } while (1);

The ENTRY_SECTION requests access to the shared data; the EXIT_SECTION notifies of completion of the critical section.
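A lock is one way to realize the ENTRY_SECTION/EXIT_SECTION pattern. Here is a minimal Python sketch of the withdraw example made safe with `threading.Lock` (an illustration of the pattern, not the course's own implementation; the dict again stands in for the database):

```python
import threading

balance = {'acct': 500}
lock = threading.Lock()   # guards the read-modify-write sequence

def withdraw(acct, amount):
    with lock:                          # ENTRY_SECTION: acquire the lock
        b = balance[acct]               # critical section:
        balance[acct] = b - amount      #   read, modify, write
        return balance[acct]            # EXIT_SECTION: release on leaving `with`

# Two concurrent $100 withdrawals, as in the ATM example:
threads = [threading.Thread(target=withdraw, args=('acct', 100))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
balance['acct']   # always 300: no interleaving can lose an update now
```

Because both threads must hold the lock for the whole read-modify-write, the two critical sections serialize and every interleaving yields the same final balance.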