Midterm Review

Agenda Part 1 – quick review of resources, I/O, kernel, interrupts Part 2 – processes, threads, synchronization, concurrency Part 3 – specific synchronization considerations, concerns and techniques Part 4 – classic synchronization problems, examples, and algorithms Part 5 – Review of old exams

Abstract View of System (diagram): applications in User Space call into O/S Space through the Application Programming Interface. © 2004, D. J. Foreman

Topics
Basic functions of an OS: device mgmt, process & resource mgmt, memory mgmt, file mgmt
Functional organization
General implementation methodologies
Performance
Trusted software
UNIX & Windows NT organization
© 2004, D. J. Foreman

Design Constraints Performance Security Correctness Maintainability Cost and "sell-ability" Standards Usability © 2004, D. J. Foreman

Resource Management
Resources: memory, CPU cycles, I/O
I/O includes networks, robot arms, motors – that is, any means of getting information (or signals) into or out of the computer
© 2004, D. J. Foreman

Resource Sharing Why do we need to share? Greater throughput Lowers cost of resources Allows more resources to be available © 2004, D. J. Foreman

Executing User Programs
Batch programming (olden days): scheduled tasks, maximize throughput
Multiprogramming (modern OS): multiple user programs, timesharing, minimize response time

I/O Techniques
Programmed I/O: the processor repeatedly checks the I/O status register
Interrupt-Driven I/O: the device interrupts the processor when I/O is ready; the processor is interrupted and involved in every word of data in the read/write
DMA: the processor delegates the work to the I/O device; the device interrupts the processor only upon completion

Memory Hierarchy
registers <- L1 cache <- L2 cache <- main memory <- disk
Data moves up the hierarchy step-by-step; memory access time slows down the processor
Memory Access - locality of reference
- Temporal locality: recently used locations
- Spatial locality: clustered locations
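
To make the locality idea concrete, here is a small, self-contained C++ sketch (not from the slides): summing a matrix row-by-row touches consecutive addresses (good spatial locality), while summing it column-by-column strides across memory and typically runs noticeably slower. The exact gap depends on the machine's cache sizes.

    // Row-major vs column-major traversal of the same data.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 2048;
        std::vector<int> a(static_cast<size_t>(N) * N, 1);

        auto time_sum = [&](bool row_major) {
            long long sum = 0;
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    sum += row_major ? a[i * N + j]   // consecutive addresses
                                     : a[j * N + i];  // stride-N addresses
            auto t1 = std::chrono::steady_clock::now();
            long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
            std::printf("sum=%lld time=%lld ms\n", sum, ms);
        };

        time_sum(true);   // good spatial locality
        time_sum(false);  // poor spatial locality
    }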

The Kernel
Implements O/S functions
Privileged, non-interruptible
Sometimes reduced to a "micro-kernel": the absolutely minimal set of functions required to be in privileged mode
A micro-kernel does NOT include: device drivers, file services, process server, virtual memory mgmt
© 2004, D. J. Foreman

Modes of Execution
Processor modes: supervisor (kernel) mode and user mode
Supervisor or kernel mode: can execute all machine instructions, can reference all memory locations
User mode: can only execute a subset of instructions, can only reference a subset of memory locations
© 2004, D. J. Foreman

Modes-3
Mechanisms for getting into kernel space:
Call to a function that issues a "trap" or "supervisor call" instruction
"Send" a message to the kernel – effectively issues a "trap"
Interrupts – H/W sets the mode bit to 1; the next instruction is in the kernel at the interrupt handler code; no "call" or "send" required
© 2004, D. J. Foreman

Modes-4: system call example
User space:
    fork(My_fork_loc);                 // library wrapper whose body is:
    { … trap(FORK, *My_fork_loc); }
    My_fork_loc: …;                    // where execution continues
Kernel space (reached through the trap table, where entry # "FORK" holds *K_fork):
    K_fork(loc) { … start_process(loc); mode = 0; return; }
© 2004, D. J. Foreman
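
The fork/K_fork picture above is generic pseudocode. As a Linux-specific illustration of the same idea (an assumption; the slides name no particular OS), the sketch below calls getpid() through the ordinary library wrapper and then issues the equivalent trap explicitly with syscall(2); both paths enter the kernel through the trap/system-call table.

    #include <cstdio>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main() {
        pid_t via_wrapper = getpid();              // normal library wrapper
        long  via_trap    = syscall(SYS_getpid);   // explicit trap into the kernel
        std::printf("getpid() = %d, syscall(SYS_getpid) = %ld\n",
                    static_cast<int>(via_wrapper), via_trap);
        return 0;
    }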

Interrupt Handler
Saves user state: IC, registers, stack, mode (kernel/user)
Switches to the device handler
Restores the user's state
Returns to the user with interrupts enabled
Might NOT be atomic – allows a new interrupt before switching
© 2004, D. J. Foreman

Trap or Supervisor Call Instruction
Atomic operation (4 parts):
Memory protection
Switches to privileged mode
Sets the interrupt flag
Sets IC to the common interrupt handler in the O/S

Key concepts
CPU cycles are wasted during a wait
Devices are independent
Multitasking (or threading) is possible
Why not overlap I/O with CPU?
Threads can decide when to wait (non-thread programs can also decide)
System throughput is increased
© 2004, D. J. Foreman

Agenda Part 1 – quick review of resources, I/O, kernel, interrupts Part 2 – processes, threads, synchronization, concurrency Part 3 – specific synchronization considerations, concerns and techniques Part 4 – classic synchronization problems, examples, and algorithms Part 5 – Review of old exams

Processes vs User Threads
Processes:
  Inter-process communication requires kernel interaction
  Switching between processes is more expensive – copy the PCB (identifier, state, priority, PC, memory pointers, registers, I/O status, open files, accounting information)
  The PCB is larger: more expensive to create, switch, and terminate
User threads:
  Share address space and resources (code, data, files)
  Inter-thread communication does not require kernel interaction
  Switching between threads is less expensive – no need to save/restore the shared address space and resources; copy the TCB (identifier, state, stack, registers, accounting info)
  The TCB is smaller (why?), so threads are less expensive all-around: to create, switch, and terminate
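
A small POSIX-flavored sketch (an assumption; the slides name no particular API) that makes the "shared address space" difference visible: a child created with fork() gets its own copy of a global counter, while a thread created with std::thread shares it.

    #include <cstdio>
    #include <sys/wait.h>
    #include <thread>
    #include <unistd.h>

    int counter = 0;  // global, in the data segment

    int main() {
        pid_t pid = fork();                  // new process: separate copy of counter
        if (pid < 0) return 1;
        if (pid == 0) { counter = 100; _exit(0); }
        waitpid(pid, nullptr, 0);
        std::printf("after child process: counter = %d\n", counter);  // still 0

        std::thread t([] { counter = 100; });  // new thread: shares counter
        t.join();
        std::printf("after thread:        counter = %d\n", counter);  // now 100
        return 0;
    }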

Context Switching - 3
The actual context switch:
Save all user state info: registers, IC, stack pointer, security codes, etc.
Load kernel registers
Access to control data structures
Locate the interrupt handler for this device
Transfer control to the handler, then:
Restore user state values
Atomically: set IC to the user location, in user mode, with interrupts allowed again
© 2004, D. J. Foreman

Questions to ponder
Why must certain operations be done atomically?
What restrictions are there during context switching?
What happens if the interrupt handler runs too long?
Why must interrupts be masked off during interrupt handling?
© 2004, D. J. Foreman

Concurrency
The appearance that multiple actions are occurring at the same time
On a uni-processor, something must make that happen – a collaboration between the OS and the hardware
On a multi-processor, the same problems exist (for each CPU) as on a uni-processor
© 2004, D. J. Foreman

The Problem
Given: "i" is global
i++; expands into:
    1. LDA i    (load i)
    2. ADA i,1  (add 1)
    3. STA i    (store back into i)
What if an interrupt occurs DURING 1 or 2? This is a "Critical Section" – incorrect values of "i" can result. How do we prevent such errors?
© 2004, D. J. Foreman
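
A runnable C++ sketch (not from the slides) of exactly this lost-update problem: two threads increment the shared "i" with no mutual exclusion, and the interleaved load/add/store sequences usually lose counts.

    #include <cstdio>
    #include <thread>

    int i = 0;  // shared, unprotected

    void work() {
        for (int k = 0; k < 1000000; ++k)
            ++i;  // compiles to a load / add / store critical section
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        std::printf("expected 2000000, got %d\n", i);  // typically smaller
        return 0;
    }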

Strategies 1. User-only mode software 2. Disabling interrupts 3. H/W & O/S support © 2004, D. J. Foreman

Agenda Part 1 – quick review of resources, I/O, kernel, interrupts Part 2 – processes, threads, synchronization, concurrency Part 3 – specific synchronization considerations, concerns and techniques Part 4 – classic synchronization problems, examples, and algorithms Part 5 – Review of old exams

Synchronization Concerns & Considerations
What is the critical section? Who accesses it? Reads? Writes?
Can there be race conditions? Is there an order for access? Can data be overwritten?
Solutions must have: mutual exclusion in the critical section, progress (no deadlock), and no starvation

System Approaches Prevention Avoidance Detection & Recovery Manual mgmt © 2004, D. J. Foreman

Conditions for Deadlock
Mutual exclusion on R1
Hold R1 & request on R2
Circularity
No preemption – once a resource is requested, the request can't be retracted (because the app is now blocked!)
All 4 must apply simultaneously
Necessary, but NOT sufficient
© 2004, D. J. Foreman
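
For illustration, a minimal C++ sketch (not from the slides) in which all four conditions hold at once: each thread holds one std::mutex and requests the other in the opposite order, so the program usually hangs. Acquiring the locks in a fixed global order (or with std::scoped_lock) breaks the circularity.

    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex r1, r2;

    void p0() {
        std::lock_guard<std::mutex> hold(r1);                        // hold R1
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> want(r2);                        // request R2
    }

    void p1() {
        std::lock_guard<std::mutex> hold(r2);                        // hold R2
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> want(r1);                        // request R1
    }

    int main() {
        std::thread a(p0), b(p1);
        a.join();  // never returns once both threads block on each other
        b.join();
        // Prevention: acquire in a fixed global order, or lock both at once
        // with std::scoped_lock(r1, r2), which avoids the circular wait.
    }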

Semaphores
Use semWait/semSignal to coordinate access to the critical section
An integer counter for each semaphore must be initialized and is used to coordinate
semWait – decrements the counter, then blocks the caller if the counter is < 0
semSignal – increments the counter and unblocks the next thread in the blocked queue
The one who locks is not necessarily the one who unlocks – a potential pitfall
Can have more than one semaphore – more complex synchronization
Those blocked waiting are always queued
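
C++ has no semWait/semSignal of its own (C++20 adds std::counting_semaphore), so the following is a minimal sketch of the counter-goes-negative convention described above, built on std::mutex and std::condition_variable. The names Semaphore, semWait, and semSignal simply mirror the slides; they are not a standard API.

    #include <condition_variable>
    #include <mutex>

    class Semaphore {
    public:
        explicit Semaphore(int initial) : count_(initial) {}

        void semWait() {
            std::unique_lock<std::mutex> lk(m_);
            --count_;                                     // decrement first
            if (count_ < 0) {                             // then block if negative
                cv_.wait(lk, [&] { return wakeups_ > 0; });
                --wakeups_;                               // consume one wakeup
            }
        }

        void semSignal() {
            std::lock_guard<std::mutex> lk(m_);
            ++count_;
            if (count_ <= 0) {                            // someone is waiting
                ++wakeups_;
                cv_.notify_one();                         // unblock one waiter
            }
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        int count_;        // may go negative: -count_ is the number of waiters
        int wakeups_ = 0;  // pending wakeups; guards against spurious wakeups
    };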

Binary Semaphores Counter can only be one or zero Access to the critical section is one at a time Similar to a mutex lock

Counting Semaphores
Counter can be any integer at any time
More complex synchronization; used for multiple concurrent threads
Examples:
  Prioritizing access to the critical section
  Tracking the bound buffer in a Producer/Consumer model
  Multiple counting semaphores can be used to coordinate multiple Readers/Writers
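
A sketch of the bound-buffer example, assuming C++20's std::counting_semaphore in place of the generic semWait/semSignal: empty_slots counts free slots, full_slots counts filled slots, and a mutex protects the queue itself.

    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <semaphore>
    #include <thread>

    constexpr int BOUND = 4;
    std::queue<int> buffer;
    std::mutex buffer_lock;
    std::counting_semaphore<BOUND> empty_slots(BOUND);  // free slots
    std::counting_semaphore<BOUND> full_slots(0);       // items available

    void producer() {
        for (int item = 0; item < 20; ++item) {
            empty_slots.acquire();                       // wait for a free slot
            { std::lock_guard<std::mutex> lk(buffer_lock); buffer.push(item); }
            full_slots.release();                        // signal: one more item
        }
    }

    void consumer() {
        for (int n = 0; n < 20; ++n) {
            full_slots.acquire();                        // wait for an item
            int item;
            { std::lock_guard<std::mutex> lk(buffer_lock); item = buffer.front(); buffer.pop(); }
            empty_slots.release();                       // signal: one more free slot
            std::printf("consumed %d\n", item);
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
    }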

Monitors
Used to encapsulate synchronization management
Private condition variables, semaphores, locks, etc.; public interfaces
Replace spaghetti semaphores with simple function calls provided by the monitor
Producer/Consumer example:
  Create a C++ class
  The class has two public functions: append and take
  The condition variables and bound buffer are private data
  In the producer and consumer code, you need only call append or take; the monitor does the rest
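
One possible way to write the monitor class the slide describes (the class name and bound are chosen here for illustration): append and take are the only public entry points, and the lock, condition variables, and buffer stay private.

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>

    class BoundedBufferMonitor {
    public:
        explicit BoundedBufferMonitor(std::size_t bound) : bound_(bound) {}

        void append(int item) {                           // called by the producer
            std::unique_lock<std::mutex> lk(m_);
            not_full_.wait(lk, [&] { return buf_.size() < bound_; });
            buf_.push(item);
            not_empty_.notify_one();
        }

        int take() {                                      // called by the consumer
            std::unique_lock<std::mutex> lk(m_);
            not_empty_.wait(lk, [&] { return !buf_.empty(); });
            int item = buf_.front();
            buf_.pop();
            not_full_.notify_one();
            return item;
        }

    private:
        std::mutex m_;                                    // one thread in the monitor at a time
        std::condition_variable not_full_, not_empty_;    // private condition variables
        std::queue<int> buf_;                             // the bound buffer
        std::size_t bound_;
    };

A producer then just calls append(item) and a consumer calls take(); neither ever touches a lock or condition variable directly, which is the point of the monitor.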

Agenda Part 1 – quick review of resources, I/O, kernel, interrupts Part 2 – processes, threads, synchronization, concurrency Part 3 – specific synchronization considerations, concerns and techniques Part 4 – classic synchronization problems, examples, and algorithms Part 5 – Review of old exams

Bakery Algorithm
    while (true) {
        choosing[i] = true;
        number[i] = 1 + max(number[], n);   // take a ticket one higher than any in use
        choosing[i] = false;
        for (int j = 0; j < n; j++) {
            while (choosing[j]) { };
            while ((number[j] != 0) && ((number[j], j) < (number[i], i))) { };   // lexicographic (ticket, id) compare
        }
        // critical section
        number[i] = 0;
    }

Dekker’s Algorithm
    flag[0] = flag[1] = false
    turn = 1

    // P0
    while (true) {
        flag[0] = true
        while (flag[1]) {
            if (turn == 1) {
                flag[0] = false
                while (turn == 1) { }
                flag[0] = true
            }
        }
        // critical section
        …
        turn = 1
        flag[0] = false
    }

    // P1
    while (true) {
        flag[1] = true
        while (flag[0]) {
            if (turn == 0) {
                flag[1] = false
                while (turn == 0) { }
                flag[1] = true
            }
        }
        // critical section
        …
        turn = 0
        flag[1] = false
    }

Peterson’s Algorithm
    flag[0] = flag[1] = false

    // P0
    while (true) {
        flag[0] = true
        turn = 1
        while (flag[1] && turn == 1) { }   // busy wait
        // critical section
        …
        flag[0] = false
    }

    // P1
    while (true) {
        flag[1] = true
        turn = 0
        while (flag[0] && turn == 0) { }   // busy wait
        // critical section
        …
        flag[1] = false
    }
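
The pseudocode above assumes loads and stores become visible in program order, which ordinary variables do not guarantee on modern compilers and CPUs. Below is a runnable C++ sketch (not from the slides) that uses std::atomic with its default sequentially consistent ordering so the algorithm actually holds.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> flag[2] = {{false}, {false}};  // "I want to enter"
    std::atomic<int>  turn{0};                       // who defers on a tie
    int shared_counter = 0;                          // protected by the algorithm

    void worker(int i) {
        const int other = 1 - i;
        for (int k = 0; k < 100000; ++k) {
            flag[i] = true;                           // announce intent
            turn = other;                             // let the other go first on a tie
            while (flag[other] && turn == other) { }  // busy wait
            ++shared_counter;                         // critical section
            flag[i] = false;                          // exit section
        }
    }

    int main() {
        std::thread t0(worker, 0), t1(worker, 1);
        t0.join();
        t1.join();
        std::printf("counter = %d (expected 200000)\n", shared_counter);
    }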

The Banker's Algorithm
maxc[i, j] is the max claim for Rj by Pi
alloc[i, j] is the units of Rj held by Pi
cj is the # of units of Rj in the whole system
Can always compute avail[j] = cj - (sum over 0 ≤ i < n of alloc[i, j]), and hence the Rj still available
Basically examine and enumerate all transitions
Classic avoidance algorithm
© 2004, D. J. Foreman

Banker's Algorithm - Steps 1 & 2 (4 resource types)
Current (safe) Allocation:
         R0  R1  R2  R3
    P0    2   0   1   1
    P1    0   1   2   1
    P2    4   0   0   3
    P3    0   2   1   0
    P4    1   0   3   0
    SUM   7   3   7   5
Total units: C = <8 5 9 7>
Compute units of R still available (C - column sum):
    avail[0] = 8 - 7 = 1
    avail[1] = 5 - 3 = 2
    avail[2] = 9 - 7 = 2
    avail[3] = 7 - 5 = 2
Step 1: copy alloc into alloc'
Step 2: the computations above yield avail = <1 2 2 2>
© 2004, D. J. Foreman

Banker's Algorithm - Step 3
avail = <1 2 2 2> = # currently available for each Rj
Compute maxc - alloc' for each Pi (look for any satisfiable)
Maximum Claims:
         R0  R1  R2  R3
    P0    3   2   1   4
    P1    0   2   5   2
    P2    5   1   0   5
    P3    1   5   3   0
    P4    3   0   3   3
alloc' for P2 is <4 0 0 3> (from prev. table)
    maxc[2, 0] - alloc'[2, 0] = 5 - 4 = 1 ≤ avail[0] ≡ 1
    maxc[2, 1] - alloc'[2, 1] = 1 - 0 = 1 ≤ avail[1] ≡ 2
    etc.
If no Pi satisfies maxc - alloc' ≤ avail, stop: the state is not safe
If alloc' = 0 for all Pi, every process can finish: the state is safe
© 2004, D. J. Foreman

Banker's algorithm for P0
    maxc[0, 0] - alloc'[0, 0] = 3 - 2 = 1 ≤ avail[0] ≡ 1
    maxc[0, 1] - alloc'[0, 1] = 2 - 0 = 2 ≤ avail[1] ≡ 2
    maxc[0, 2] - alloc'[0, 2] = 1 - 1 = 0 ≤ avail[2] ≡ 2
    maxc[0, 3] - alloc'[0, 3] = 4 - 1 = 3 > avail[3] ≡ 2
Therefore P0 cannot make a transition to a safe state from the current state. Likewise for P1.
© 2004, D. J. Foreman

Banker's Algorithm - Step 4
So P2 can claim, use and release all its Rj, giving a new availability vector:
    avail2[0] = avail[0] + alloc'[2, 0] = 1 + 4 = 5
    avail2[1] = avail[1] + alloc'[2, 1] = 2 + 0 = 2
    avail2[2] = avail[2] + alloc'[2, 2] = 2 + 0 = 2
    avail2[3] = avail[3] + alloc'[2, 3] = 2 + 3 = 5
avail2 = <5 2 2 5>, so at least one P can get its max claim satisfied
© 2004, D. J. Foreman
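
A compact C++ sketch (not from the slides) of the safety check the last four slides walk through, hard-coding the allocation and maximum-claim tables above: it repeatedly finds a process whose remaining claim fits in avail, lets it finish, reclaims its allocation, and declares the state safe if every process can finish.

    #include <cstdio>

    constexpr int P = 5, R = 4;   // 5 processes, 4 resource types (as in the slides)

    int main() {
        const int alloc[P][R] = {{2,0,1,1},{0,1,2,1},{4,0,0,3},{0,2,1,0},{1,0,3,0}};
        const int maxc [P][R] = {{3,2,1,4},{0,2,5,2},{5,1,0,5},{1,5,3,0},{3,0,3,3}};
        int  avail[R] = {1, 2, 2, 2};             // C minus the column sums of alloc
        bool done[P]  = {false};

        for (bool progress = true; progress; ) {
            progress = false;
            for (int i = 0; i < P; ++i) {
                if (done[i]) continue;
                bool fits = true;                 // does maxc - alloc fit in avail?
                for (int j = 0; j < R; ++j)
                    if (maxc[i][j] - alloc[i][j] > avail[j]) { fits = false; break; }
                if (fits) {                       // let Pi run to completion, reclaim its R
                    for (int j = 0; j < R; ++j) avail[j] += alloc[i][j];
                    done[i] = true;
                    progress = true;
                    std::printf("P%d can finish; avail is now <%d %d %d %d>\n",
                                i, avail[0], avail[1], avail[2], avail[3]);
                }
            }
        }
        bool safe = true;
        for (int i = 0; i < P; ++i) safe = safe && done[i];
        std::printf("state is %s\n", safe ? "SAFE" : "UNSAFE");
    }

Running it reproduces the slides' first step (P2 finishes first) and then finds an order for the remaining processes, confirming that the allocation above is indeed safe.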

Agenda Part 1 – quick review of resources, I/O, kernel, interrupts Part 2 – processes, threads, synchronization, concurrency Part 3 – specific synchronization considerations, concerns and techniques Part 4 – classic synchronization problems, examples, and algorithms Part 5 – Review of old exams