Processes and Schedulers

What is a Process

Process: An execution stream and its associated state

Execution Stream
– Set of instructions
– "Thread of control"

Process State
– Hardware state: privilege level, segments, page tables
– OS state: priority, I/O buffers, heap, memory map
– Resource state: I/O requests

An abstraction to make it easier to program both the OS and applications
– Encapsulates state into a manageable unit

Programs and Processes

A process is not a program
– Program: static code and static data
– Process: that code plus dynamic state (heap, stack, registers)

    int foo() { return 0; }
    int main() { foo(); return 0; }

[Figure: the same source appears once as a static Program, and once inside a Process that adds a heap, a stack, and register state]

The OS can host multiple processes of the same program
– E.g. many users can run 'ls' at the same time
One program can invoke multiple processes
– E.g. make runs many processes to compile code
There is no one-to-one mapping between programs and processes

Threads and Processes

A process is different from a thread
– Conceptually (on Linux it is more complicated)
Thread: separate execution streams in the same address space
– "Lightweight process"
A process can contain multiple threads

[Figure: a single-threaded process has one heap and one stack/register set; a multi-threaded process shares one heap, but each thread gets its own stack and registers]
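A minimal user-level sketch of the same idea using POSIX threads (the two threads and the shared counter are invented for illustration): both threads see the same global data because they share one address space, while each gets its own stack and registers.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                      /* lives in the shared address space */

    static void *worker(void *arg) {
        int local = (int)(long)arg;      /* each thread has its own stack copy */
        shared += local;                 /* both threads touch the same global
                                            (yes, this is a race -- later lectures) */
        printf("thread %d: shared=%d\n", local, shared);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final shared=%d\n", shared);
        return 0;
    }

Compile with -pthread; the point is only that the global is shared while the stacks are not.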

System Classification

Uniprogramming: only one process at a time
– Examples: original systems and older PC OSes (e.g., DOS)
– Advantages: easier for the OS designer
– Disadvantages: terrible utilization, poor usability
Multiprogramming: multiple processes at a time
– Examples: every modern OS you use
– Note: multiprogramming is different from multiprocessing
  (multiprocessing: systems with multiple processors)
– Advantages: better utilization and usability
– Disadvantages: complex OS design

Multiprogramming

OS requirements for multiprogramming
– Policy to determine which process to run
– Mechanism to switch between processes
– Methods to protect processes from one another (memory management system)

Separation of policy and mechanism
– A recurring theme in OS design
– Policy: decision maker based on some metric (the scheduler)
– Mechanism: low-level code that implements the decision (the dispatcher / context switch)

Multiprogramming and Memory

Many OSes didn't do a very good job of combining these
Early PC OSes didn't protect memory
– MacOS and Windows
– Each process could access all of memory (same address space)
– Basically a giant multithreaded environment
All modern OSes now include a memory map in the PCB
– Processes cannot access each other's memory

Dispatch Mechanism

OS maintains a list of all processes
Each process has a mode
– Running: executing on the CPU
– Ready: waiting to execute on the CPU
– Blocked: waiting for I/O or synchronization with another thread

Dispatch loop:

    while (1) {
        run process for a while;
        stop process and save its state;
        load state of another process;
    }

Two questions: How does the dispatcher gain control? What execution state must be saved/restored?
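A toy rendering of that loop (not real kernel code; pick_next() and the saved_pc field are placeholders invented for illustration) that shows where the policy/mechanism split sits:

    #include <stdio.h>

    /* Toy dispatcher: a real one manipulates hardware state, not printf. */
    struct pcb { int pid; int saved_pc; };

    static struct pcb procs[3] = { {1, 0}, {2, 0}, {3, 0} };
    static int current = 0;

    static int pick_next(void) {              /* policy: trivial round robin here */
        return (current + 1) % 3;
    }

    int main(void) {
        for (int tick = 0; tick < 6; tick++) {   /* pretend timer interrupts */
            printf("run pid %d for a while\n", procs[current].pid);
            procs[current].saved_pc += 10;       /* "stop process and save its state" */
            current = pick_next();               /* "load state of another process" */
        }
        return 0;
    }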

How does the dispatcher gain control?

Must change from user to system mode
– Problem: only one CPU, and the CPU can only do one thing at a time
– While a user process is running, the dispatcher isn't
Two ways the OS gains control
– Traps: events caused by process execution
  System calls, page faults, exceptions (segfault, etc.)
– Hardware interrupts: events external to the process
  Typing at the keyboard, network packet arrivals
– Control switches to the OS via an Interrupt Service Routine (ISR)
How does the OS guarantee it will regain control?

Approaches to the dispatcher

Option 1: Cooperative multitasking
– Trust the process to invoke the dispatcher
– Linux: the default for kernel code (schedule())
– Disadvantage: a mistake in one part of the code can lock up the entire system
Option 2: True multitasking
– Configure hardware to periodically invoke the dispatcher
– A hardware-generated timer interrupt; the timer ISR invokes the dispatcher
– Linux: enabled for user processes (the HZ tick rate)
– Processes run for some multiple of timer "ticks" (interrupts): the process time slice

What state must be saved?

OS must track the state of processes
– On every trap/interrupt, save the process state in a "process control block" (PCB)
– Why on every trap/interrupt?
Data structure problem: how to manage all the PCBs
Information stored in the PCB
– Execution state: general registers, control registers, CPU flags, RSP, RIP, page tables
– OS state: memory map, heap space
– I/O status: open files and sockets
– Scheduling information: execution mode, priority
– Accounting information: owner, PID
– Plus lots more
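As a rough illustration, a PCB could be declared in C along the lines below; every field name and size is hypothetical and does not correspond to any particular kernel.

    #include <stdint.h>

    /* Hypothetical process control block -- a real kernel's version is far larger. */
    enum proc_mode { RUNNING, READY, BLOCKED };

    struct pcb {
        /* Execution state */
        uint64_t regs[16];          /* general registers */
        uint64_t rip, rsp, rflags;  /* program counter, stack pointer, CPU flags */
        uint64_t cr3;               /* page-table base (control register) */

        /* OS state */
        void *memory_map;           /* memory regions, heap limits, ... */

        /* I/O status */
        int open_files[64];         /* open files and sockets */

        /* Scheduling information */
        enum proc_mode mode;
        int priority;

        /* Accounting information */
        int pid;
        int owner_uid;

        struct pcb *next;           /* e.g. link on the ready queue */
    };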

Context Switch implementation

Machine-dependent code (assembly!)
– Different for MIPS, ARM, x86, etc.
– Saves process state to the PCB
Tricky: the OS must save state without changing that state
Requires special hardware support
– Save process state on each trap/interrupt
– Very nasty: x86 has a hardware TSS (essentially a hardware PCB) that most OSes avoid
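Real context switches are written in kernel assembly, but the save-registers/restore-registers idea can be sketched in user space with the (obsolescent, still available on Linux/glibc) POSIX ucontext API. This is only an analogy for illustration, not how a kernel switches processes.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, co_ctx;
    static char co_stack[64 * 1024];           /* separate stack for the coroutine */

    static void coroutine(void) {
        printf("coroutine: running, switching back\n");
        swapcontext(&co_ctx, &main_ctx);       /* save our registers, restore main's */
        printf("coroutine: resumed, finishing\n");
    }

    int main(void) {
        getcontext(&co_ctx);                   /* capture a template context */
        co_ctx.uc_stack.ss_sp = co_stack;
        co_ctx.uc_stack.ss_size = sizeof co_stack;
        co_ctx.uc_link = &main_ctx;            /* where to go when coroutine returns */
        makecontext(&co_ctx, coroutine, 0);

        printf("main: switching to coroutine\n");
        swapcontext(&main_ctx, &co_ctx);       /* save main's state, load coroutine's */
        printf("main: back, switching again\n");
        swapcontext(&main_ctx, &co_ctx);
        printf("main: done\n");
        return 0;
    }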

Process Creation

Two ways to create a process
– Build one from scratch
– Clone an existing one

Option 1: From scratch (Windows: CreateProcess(...))
– Load the specified code and data into memory
– Create an empty call stack
– Create and initialize the PCB (make it look like a context switch just happened)
– Add the process to the ready list

Option 2: Cloning (UNIX: fork())
– Stop the current process and save its state
– Copy its code, data, stack, and PCB
– Add the new process's PCB to the ready list
– Do we really need to copy everything?

Creating a process (Windows and UNIX)

Windows:

    BOOL WINAPI CreateProcess(
        _In_opt_    LPCTSTR               lpApplicationName,
        _Inout_opt_ LPTSTR                lpCommandLine,
        _In_opt_    LPSECURITY_ATTRIBUTES lpProcessAttributes,
        _In_opt_    LPSECURITY_ATTRIBUTES lpThreadAttributes,
        _In_        BOOL                  bInheritHandles,
        _In_        DWORD                 dwCreationFlags,
        _In_opt_    LPVOID                lpEnvironment,
        _In_opt_    LPCTSTR               lpCurrentDirectory,
        _In_        LPSTARTUPINFO         lpStartupInfo,
        _Out_       LPPROCESS_INFORMATION lpProcessInformation
    );

UNIX:

    int fork();

Creating processes in UNIX

A combination of fork() and exec(...)
– fork(): clone the current process
– exec(...): overlay a new program on top of the current process

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char *cmd = "/bin/sh";
        pid_t pid = fork();
        if (pid == 0) {                      // child process
            execl(cmd, cmd, (char *)NULL);   // exec does not return on success; WE NEVER GET HERE
            perror("execl");                 // reached only if exec fails
            exit(1);
        } else {                             // parent process: wait for the child to finish
            waitpid(pid, NULL, 0);
        }
        return 0;
    }

Advantage: flexible, clean, simple

Process Abstraction

Processes are a low-level component
– They provide an abstraction to build on
Fundamental OS design
– Provide abstract units (resources) that high-level policies can act on
Resources
– Resources are high-level units managed by the OS
– CPU time, memory, disk space, I/O bandwidth
How does the OS manage resources?

Resources

Preemptible
– The resource can be taken away and used by somebody else
– Example: CPU
Non-preemptible
– Once a resource is assigned, it can only be returned voluntarily
– Example: disk space
The OS must balance the set of resources against the requests for those resources
– OS management depends on the type of resource

Decisions about Resources

Allocation: which process gets which resource
– Which resources should each process get?
– Space sharing: control concurrent access to the resource
– Implication: the resource is not easily preemptible
– Example: disk space
Scheduling: how long a process keeps a resource
– In which order should requests be serviced?
– Time sharing: more resources are requested than exist
– Implication: the resource is preemptible
– Example: CPU time

Role of Dispatcher vs. Scheduler

Dispatcher
– Low-level mechanism
– Responsibility: the context switch
  Change the mode of the old process to either READY or BLOCKED
  Save the execution state of the old process in its PCB
  Load the state of the new process from its PCB
  Change the mode of the new process to RUNNING
  Switch to user-mode privilege
  Jump to the process's next instruction
Scheduler
– Higher-level policy
– Responsibility: decide which process the CPU should be allocated to

Scheduling Performance Metrics

Minimize response time
– Increase interactivity (responsiveness of user interfaces)
Maximize resource utilization
– Keep the CPU and disks busy
Minimize overhead
– Reduce context switches (both their number and their cost)
Distribute resources fairly
– Give each user/process the same percentage of the CPU

Scheduling Algorithms

Process (job) model
– A process alternates between CPU bursts and I/O bursts
– CPU-bound job: long CPU bursts
– I/O-bound job: short CPU bursts
– Burst lengths aren't known before execution
– Need to handle the full range of possible workloads

Scheduling algorithms
– First-Come-First-Served (FCFS)
– Shortest Job First (SJF) / Shortest Time to Completion First (STCF)
– Round-Robin (RR)
– Priority scheduling
– Other scheduling algorithms that are actually used

First Come First Served (FCFS)

Simplest scheduling algorithm
– The first job that requests the CPU is allocated the CPU
– Non-preemptive
Advantage: simple implementation with a FIFO queue
Disadvantage: response time depends on arrival order
– Unfair to later jobs (especially if the system has long jobs)

[Figure: timeline in which Job A, Job B, and Job C run back to back on the CPU]

Uniprogramming: run each job to completion
Multiprogramming: put a job at the back of the queue when it performs I/O
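A small sketch of the arithmetic (the three burst lengths are made-up numbers): jobs run to completion in arrival order, so each job waits for everything ahead of it.

    #include <stdio.h>

    /* FCFS: jobs run to completion in arrival order (all arrive at time 0 here). */
    int main(void) {
        const char *name[] = { "A", "B", "C" };
        int burst[] = { 10, 2, 2 };            /* hypothetical CPU burst lengths */
        int n = 3, t = 0;
        double total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            total_wait += t;                   /* wait until everyone ahead finishes */
            t += burst[i];
            total_turnaround += t;
            printf("%s: start=%2d finish=%2d\n", name[i], t - burst[i], t);
        }
        printf("avg wait=%.1f avg turnaround=%.1f\n",
               total_wait / n, total_turnaround / n);
        return 0;
    }

With the long job A first, B and C spend most of their time waiting behind it, which is exactly the convoy effect described next.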

Convoy Effect

Short-running jobs get stuck waiting behind long jobs
– Example: 1 CPU-bound job, 3 I/O-bound jobs
Problems
– Reduces utilization of I/O devices
– Hurts response time of short jobs

[Figure: Gantt chart in which the CPU-bound job's long bursts keep the CPU busy while the disk sits idle, and the I/O-bound jobs' short bursts are repeatedly delayed]

Shortest Job First

Minimizes average response time

[Figure: timeline of Jobs A, B, C on the CPU; FCFS if simultaneous arrival]

Provably optimal for response time (given no preemption)
– A short job is improved more than the long job is hurt
Not practical: cannot know burst lengths (I/O + CPU) in advance
– Can only use past behavior to predict future behavior
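Reusing the made-up bursts from the FCFS sketch above, sorting them shortest-first before the same loop turns it into non-preemptive SJF and drops the average wait:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    /* Same hypothetical jobs as the FCFS sketch, but scheduled shortest-first. */
    int main(void) {
        int burst[] = { 10, 2, 2 };
        int n = 3, t = 0;
        double total_wait = 0;

        qsort(burst, n, sizeof burst[0], cmp); /* shortest job first */
        for (int i = 0; i < n; i++) {
            total_wait += t;
            t += burst[i];
        }
        printf("SJF avg wait=%.1f (vs. 7.3 for the FCFS order above)\n",
               total_wait / n);
        return 0;
    }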

Shortest Time to Completion First (STCF)

SJF with preemption
– A new process arrives with a short CPU burst
– If it is shorter than the remaining time of the current job, it preempts it

[Figure: two CPU timelines for the same submission pattern (A at time 0; B, C, D at time t): one under STCF, one under SJF without preemption]

Shortest Remaining Processing Time (SRPT)

STCF for batch workloads
– Used in distributed systems
– Provides maximum throughput (transactions/sec)
– Minor risk of starvation
Very popular in web servers and similar systems

Round Robin (RR)

A practical approach to support time-sharing
– Run a job for one time slice, then move it to the back of the queue
– A job is preempted if it is still running at the end of its time slice
Advantages
– Fair allocation of the CPU across jobs
– Low average response time when job lengths vary widely
– Avoids worst-case scenarios and starvation

[Figure: timeline in which the CPU cycles through A, B, C, A, B, C, ... one time slice at a time]
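A sketch of the time-slicing behavior (the slice length and burst lengths are made-up): each job runs for at most one slice per trip around the queue until it finishes.

    #include <stdio.h>

    /* Round-robin simulation: fixed time slice, unfinished jobs keep cycling. */
    int main(void) {
        int remaining[] = { 5, 3, 8 };          /* hypothetical remaining burst per job */
        int n = 3, slice = 2, t = 0, done = 0;
        int finish[3] = { 0 };

        while (done < n) {
            for (int i = 0; i < n; i++) {       /* one pass = one trip around the queue */
                if (remaining[i] == 0) continue;
                int run = remaining[i] < slice ? remaining[i] : slice;
                t += run;
                remaining[i] -= run;
                if (remaining[i] == 0) { finish[i] = t; done++; }
            }
        }
        for (int i = 0; i < n; i++)
            printf("job %d finishes at t=%d\n", i, finish[i]);
        return 0;
    }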

Disadvantages of Round Robin

Poor average response time when job sizes are identical
– E.g. 10 jobs that each require 10 time slices
– All complete after 100 time slices
– Even FCFS is better in this case
How large should the time slice be?
– It depends on the workload!
– Tradeoff between throughput and responsiveness (batch vs. interactive workloads)

Priority-based scheduling

Priorities are assigned to each process
– Run the highest-priority job in the system (that is ready)
– Round-robin among jobs at the same priority level
Aside: how to parse priority numbers
– Is low high, or is high high? (It depends on the system)
Static vs. dynamic priorities
– Some jobs have static priority assignments (kernel threads)
– Others need to be dynamic (user applications)
– Should priorities change as a result of scheduling decisions?
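On UNIX-like systems "low is high": a smaller nice value means higher priority, and an unprivileged process may only make itself less important. A small sketch using the standard nice()/getpriority() calls (error handling omitted):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Lower this process's priority by raising its nice value.
       Nice values typically run from -20 (highest priority) to 19 (lowest). */
    int main(void) {
        int before = getpriority(PRIO_PROCESS, 0);
        nice(10);                               /* politely ask for less CPU */
        int after = getpriority(PRIO_PROCESS, 0);
        printf("nice value: %d -> %d\n", before, after);
        return 0;
    }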

Real world scheduling

Current schedulers combine the approaches above
– Priorities, round-robin queues, queue reordering, and much more
– There is no single good algorithm
– Nor is there even a way to measure "goodness"
Most schedulers are designed around heuristics and best guesses about likely workloads
– Linux has a lot of complexity for detecting batch vs. interactive processes (which can change during execution)
Often a scheduler will work great 99% of the time but completely fall over for the remaining 1% of workloads