Computer Science Lecture 6, page 1 CS677: Distributed OS

Processes and Threads
–Processes and their scheduling
–Multiprocessor scheduling
–Threads
–Distributed scheduling/migration

Processes: Review
–Multiprogramming versus multiprocessing
–Kernel data structure: the process control block (PCB)
–Each process has an address space containing its code, global and local variables, …
–Process state transitions
–Uniprocessor scheduling algorithms: round-robin, shortest job first, FIFO, lottery scheduling, EDF
–Performance metrics: throughput, CPU utilization, turnaround time, response time, fairness

Process Behavior
Processes alternate between CPU bursts and I/O waits.
CPU bursts:
–Most bursts are short, a few are very long (high variance)
–Burst lengths are modeled with a hyperexponential distribution
–If X is an exponential r.v.: Pr[X ≤ x] = 1 − e^(−λx), with E[X] = 1/λ
–If X is a two-phase hyperexponential r.v.: Pr[X ≤ x] = 1 − p·e^(−λ1·x) − (1−p)·e^(−λ2·x), with E[X] = p/λ1 + (1−p)/λ2

Process Scheduling
Priority queues: multiple queues, each with a different priority
–Strict priority scheduling across queues
–Example (highest to lowest): page swapper, kernel tasks, real-time tasks, user tasks
Multi-level feedback queue
–Multiple queues with priorities; round robin within each level
–Processes dynamically move from one queue to another, depending on their priority/CPU characteristics
–Gives higher priority to I/O-bound or interactive tasks, lower priority to CPU-bound tasks

Processes and Threads
Traditional process
–One thread of control through a large, potentially sparse address space
–Address space may be shared with other processes (shared memory)
–Collection of system resources (files, semaphores, …)
Thread (lightweight process)
–A flow of control through an address space
–Each address space can have multiple concurrent flows of control
–Each thread has access to the entire address space
–Potentially parallel execution, minimal state (low overhead)
–May need synchronization to control access to shared variables

Threads
–Each thread has its own stack, program counter, and registers
–Threads share the address space, open files, …

Why use Threads?
–Large multiprocessors need many computing entities (one per CPU)
–Switching between processes incurs high overhead
–With threads, an application can avoid per-process overheads: thread creation, deletion, and switching are cheaper than for processes
–Threads have full access to the address space (easy sharing)
–Threads can execute in parallel on multiprocessors

Why Threads?
–Single-threaded process: blocking system calls, no parallelism
–Finite-state machine [event-based]: non-blocking calls with parallelism
–Multi-threaded process: blocking system calls with parallelism
Threads retain the idea of sequential processes with blocking system calls, and yet achieve parallelism.
Software engineering perspective
–Applications are easier to structure as a collection of threads, each performing several [mostly independent] tasks

Multi-threaded Clients Example: Web Browsers
Browsers such as IE are multi-threaded. Such browsers can display data before the entire document is downloaded, performing multiple simultaneous tasks:
–Fetch the main HTML page, then activate separate threads for the other parts
–Each thread sets up a separate connection to the server, using blocking calls
–Each part (e.g., a GIF image) is fetched separately and in parallel
–Advantage: connections can be set up to different sources (ad server, image server, web server, …)

Multi-threaded Server Example
Apache web server: pool of pre-spawned worker threads
–A dispatcher thread waits for requests
–For each request, it chooses an idle worker thread
–The worker thread uses blocking system calls to service the web request

Thread Management
–Creation and deletion of threads: static versus dynamic
–Critical sections: synchronization primitives (blocking locks, spin-locks/busy-waiting) and condition variables
–Global thread variables
–Kernel versus user-level threads

User-level versus Kernel Threads
Key issues:
–Cost of thread management: more efficient in user space
–Ease of scheduling
–Flexibility: many parallel programming models and schedulers
–Process blocking: a potential problem for user-level threads

User-level Threads
Threads are managed by a threads library; the kernel is unaware of their presence.
Advantages:
–No kernel modifications are needed to support threads
–Efficient: creation, deletion, and switching do not require system calls
–Flexibility in scheduling: the library can use different, even application-dependent, scheduling algorithms
Disadvantages:
–Blocking system calls must be avoided [one blocked thread blocks all threads in the process]
–Threads compete with one another for the process's share of the CPU
–Cannot take advantage of multiprocessors [no real parallelism]

User-level threads

Kernel-level Threads
The kernel is aware of the presence of threads.
–Better scheduling decisions, but more expensive thread management
–Better for multiprocessors; more overhead on uniprocessors

Light-weight Processes (LWPs)
–Several LWPs per heavy-weight process
–A user-level threads package provides thread creation/destruction and synchronization primitives
–Multithreaded applications create multiple threads and assign them to LWPs (one-to-one, many-to-one, many-to-many)
–Each LWP, when scheduled, searches for a runnable thread [two-level scheduling]; the shared thread table needs no kernel support
–When a thread on an LWP blocks on a system call, execution switches to kernel mode and the OS context switches to another LWP

LWP Example

Thread Packages
POSIX Threads (pthreads)
–Widely used threads package
–Conforms to the POSIX standard
–Sample calls: pthread_create, …
–Typically used in C/C++ applications
–Can be implemented as user-level threads, kernel-level threads, or via LWPs
Java Threads
–Native thread support built into the language
–Threads are scheduled by the JVM