Multiprocessor and Real-Time Scheduling

Multiprocessor and Real-Time Scheduling Chapter 10

Uniprocessor Scheduling (review questions)
- What is the name for RR without preemption?
- What is the main difficulty in the SPN and SRT algorithms?
- What is the name of the algorithm obtained by introducing preemption into SPN?
- What algorithm increases the priority of a process with its age?
- What do we mean by “Feedback Scheduling”?
- What is the main idea behind “Fair Share Scheduling”?

Mid-Term Test Guidelines
- Full guide is now online; the excluded topics are also listed there
- Max marks = 20, time = 80 minutes
- 8-10 short questions expecting short answers; some numerical problems may be included (some questions may have sub-parts)
- Scope: Chapters 1, 2, 3, 4, 5, 6, and 9

Mid-Term Test Guidelines (continued)
- Attempt the textbook's end-of-chapter problems, but do not attempt theorem-proving and lemma-derivation type questions
- Attempt all worksheets again and revise the questions attempted during lectures
- Do not worry about acronyms; the full form will be provided in the exam
- Build your math skills
- Get a haircut
- Use express checkout in local stores

Classifications of Multiprocessor Systems
- Loosely coupled multiprocessor: each processor has its own memory and I/O channels
- Functionally specialized processors: such as an I/O processor or a graphics coprocessor, controlled by a master processor
- Tightly coupled multiprocessor: processors share main memory and are controlled by the operating system
Next we look at the granularity of synchronization in such systems.

Independent Parallelism
- All processes are separate applications or jobs (for example, a multi-user Linux system)
- No synchronization among processes
- Because more than one processor is available, average response time to users is lower

Coarse and Very Coarse-Grained Parallelism
- Synchronization among processes at a very gross level
- Good for concurrent processes running on a multiprogrammed uniprocessor
- Can be supported on a multiprocessor with little change
- The synchronization interval is up to 2,000 instructions for coarse-grained and up to 1 million instructions for very coarse-grained parallelism

Medium-Grained Parallelism
- Parallel processing or multitasking within a single application
- A single application is a collection of threads
- Threads usually interact frequently
- Medium-grained parallelism corresponds to a synchronization interval of roughly 20 to 200 instructions

Fine-Grained Parallelism
- Highly parallel applications
- A specialized and fragmented area
- Threads need to interact, on average, more than once every 20 instructions (a synchronization interval of fewer than 20 instructions)

Scheduling
It is a two-dimensional problem: which process runs next, and which processor runs it?
Three design issues:
- Assignment of processes to processors
- Use of multiprogramming on individual processors
- Actual dispatching of a process

Assignment of Processes to Processors
- Treat processors as a pooled resource and assign processes to processors on demand, or
- Permanently assign each process to a processor:
  - A dedicated short-term queue for each processor
  - Less overhead
  - But a processor could sit idle while another processor has a backlog

Assignment of Processes to Processors
- Global queue: schedule onto any available processor (even after blocking; this may cause overhead if the process is migrated to a new processor; affinity scheduling reduces this problem)
- Master/slave architecture:
  - Key kernel functions always run on a particular processor
  - The master is responsible for scheduling; a slave sends service requests to the master
  - Disadvantages: failure of the master brings down the whole system, and the master can become a performance bottleneck
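The affinity point above can be made concrete. A minimal, Linux-specific sketch (assuming the GNU sched_setaffinity interface; the choice of CPU 0 is arbitrary) that pins the calling process to one processor so its cached state stays warm there:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);          /* start with an empty CPU set   */
        CPU_SET(0, &mask);        /* allow execution on CPU 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("Pinned to CPU 0; the dispatcher will keep this process there.\n");
        return 0;
    }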

Multiprocessor Scheduling Fig 8-12 (Tanenbaum)

Assignment of Processes to Processors
- Peer architecture:
  - The operating system can execute on any processor
  - Each processor does self-scheduling
  - Complicates the operating system: it must make sure that two processors do not choose the same process
- Many alternatives are possible between the master/slave and peer extremes
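To illustrate the requirement that two self-scheduling processors never choose the same process, here is a minimal sketch using C11 atomics; the task structure and state values are hypothetical, not taken from any particular kernel. Each processor attempts a compare-and-swap, and exactly one of them wins:

    #include <stdatomic.h>
    #include <stdbool.h>

    enum { TASK_READY = 0, TASK_CLAIMED = 1 };

    struct task {
        _Atomic int state;    /* TASK_READY or TASK_CLAIMED              */
        /* ... saved registers, stack pointer, etc. (omitted) ...        */
    };

    /* Called by each processor's self-scheduler; returns true only for
     * the single processor whose compare-and-swap succeeds. */
    bool try_claim(struct task *t)
    {
        int expected = TASK_READY;
        return atomic_compare_exchange_strong(&t->state, &expected, TASK_CLAIMED);
    }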

Multiprogramming on Processors
- Multiprogramming is key to performance improvement
- For coarse-grained parallelism, multiprogramming on each processor is desirable to avoid performance problems
- For more tightly synchronized (medium- and fine-grained) threads, it may not be desirable to suspend a thread while a sibling thread is still running on a different processor
- In short-term scheduling (pick and dispatch), overly complicated scheduling schemes may not be necessary

Process Scheduling
- A single queue can serve all processors
- Multiple queues may be used for priorities, but all queues feed the common pool of processors
- The specific scheduling discipline is less important with more than one processor
- Maybe simple FCFS for each processor is sufficient!

Thread Scheduling
- A thread executes separately from the rest of its process
- An application can be a set of threads that cooperate and execute concurrently in the same address space
- Threads running on separate processors can yield a dramatic gain in performance (in a uniprocessor system, they merely overlap I/O blocking in one thread with execution of another)

Multiprocessor Thread Scheduling
- Load sharing: threads are not assigned to a particular processor (in contrast to load balancing with static assignments); they wait in a global queue
- Gang scheduling: a set of related threads is scheduled to run on a set of processors at the same time

Multiprocessor Thread Scheduling
- Dedicated processor assignment: threads are assigned to a specific processor
- Dynamic scheduling: the number of threads can be altered during the course of execution

Load Sharing
- Load is distributed evenly across the processors
- No centralized scheduler is required
- Uses global queues
- Three versions: FCFS (each job places its threads sequentially into the global queue); smallest number of threads first; and its preemptive variant
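A minimal sketch of the global-queue idea, assuming a POSIX-threads setting in which each worker thread stands in for a physical processor; the work structure is hypothetical. Every idle "processor" pulls the next ready descriptor from one shared FCFS queue, and the single lock is exactly the central-queue bottleneck discussed on the next slide:

    #include <pthread.h>
    #include <stddef.h>

    struct work { struct work *next; /* thread descriptor fields ... */ };

    static struct work *queue_head;            /* global FCFS ready queue */
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each idle processor calls this to fetch the next ready item. */
    struct work *next_ready(void)
    {
        pthread_mutex_lock(&queue_lock);       /* the shared-queue bottleneck */
        struct work *w = queue_head;
        if (w != NULL)
            queue_head = w->next;
        pthread_mutex_unlock(&queue_lock);
        return w;                              /* NULL means the queue is empty */
    }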

Disadvantages of Load Sharing
- The central queue needs mutual exclusion; it may become a bottleneck when more than one processor looks for work at the same time
- Preempted threads are unlikely to resume execution on the same processor, so cache use is less efficient
- If all threads go through the global queue, all threads of a program will not gain access to processors at the same time
- STILL, IT IS THE MOST COMMONLY USED SCHEME

A Problem With Load Sharing (Ref: Ch 8 Tanenbaum)

The Solution: Gang Scheduling
- Simultaneous scheduling of the threads that make up a single process on a set of processors
- Useful for applications where performance severely degrades when any part of the application is not running
- Threads often need to synchronize with each other
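A rough, illustrative sketch of gang scheduling as a time-sliced table; the gang names, thread counts, and fixed processor count are made-up values. Each time slice is handed to all threads of one gang, so related threads always run (and can synchronize) together:

    #include <stdio.h>

    #define NPROC 4

    /* Each gang is a group of related threads scheduled together. */
    struct gang { const char *name; int nthreads; };

    int main(void)
    {
        struct gang gangs[] = { {"A", 4}, {"B", 2}, {"C", 3} };
        int ngangs = 3;

        /* One time slice per gang: all of its threads are dispatched
         * simultaneously; leftover processors simply idle that slice. */
        for (int slice = 0; slice < ngangs; slice++) {
            printf("slice %d:", slice);
            for (int cpu = 0; cpu < NPROC; cpu++) {
                if (cpu < gangs[slice].nthreads)
                    printf("  CPU%d=%s%d", cpu, gangs[slice].name, cpu);
                else
                    printf("  CPU%d=idle", cpu);
            }
            printf("\n");
        }
        return 0;
    }

Processors left over in a slice simply idle; that wasted capacity is the price gang scheduling pays for keeping related threads together.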

Scheduling Groups

Multiple Threads Across Multiple CPUs (Tanenbaum, Ch. 8)

Dedicated Processor Assignment
- When an application is scheduled, each of its threads is assigned to a processor
- No multiprogramming of processors
- Some processors may be idle (tolerable if hundreds of processors are present)

Dynamic Scheduling
- The number of threads in a process is altered dynamically by the application
- The operating system adjusts the load to improve utilization; when a job requests processors:
  - Assign idle processors
  - A new arrival may be given a processor taken from a job currently using more than one processor
  - Otherwise, hold the request until a processor becomes available
  - New arrivals are given a processor before existing running applications

Real-Time Systems
- Correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced
- Tasks or processes attempt to control or react to events that take place in the outside world
- These events occur in “real time”, and the process must be able to keep up with them

Real-Time Systems: Examples
- Control of laboratory experiments
- Process control in industrial plants
- Robotics
- Air traffic control
- Telecommunications
- Military command and control systems

Characteristics of Real-Time Operating Systems
- Deterministic: operations are performed at fixed, predetermined times or within predetermined time intervals
- Concerned with how long the operating system delays before acknowledging an interrupt

Characteristics of Real-Time Operating Systems
- Responsiveness: how long, after acknowledgment, it takes the operating system to service the interrupt
- Includes the amount of time needed to begin execution of the interrupt service routine
- Includes the amount of time needed to perform the interrupt handling itself

Characteristics of Real-Time Operating Systems
- User control: the user specifies priority, paging policy, which processes must always reside in main memory, which disk scheduling algorithms to use, and the rights of processes

Characteristics of Real-Time Operating Systems
- Reliability: degradation of performance may have catastrophic consequences
- The system attempts either to correct the problem or to minimize its effects while continuing to run
- The most critical, high-priority tasks continue to execute

Features of Real-Time Operating Systems
- Fast context switch
- Small size
- Ability to respond to external interrupts quickly
- Multitasking with interprocess communication tools such as semaphores, signals, and events

Features of Real-Time Operating Systems
- Use of special sequential files that can accumulate data at a fast rate
- Preemptive scheduling based on priority
- Minimization of intervals during which interrupts are disabled
- Primitives to delay tasks for a fixed amount of time
- Special alarms and timeouts
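The "delay tasks for a fixed amount of time" primitive maps onto standard POSIX calls. A minimal sketch using clock_nanosleep with an absolute wake-up time on the monotonic clock (the 10 ms interval is an arbitrary example):

    #define _POSIX_C_SOURCE 200112L
    #include <time.h>
    #include <stdio.h>

    /* Sleep until 10 ms after "now"; with TIMER_ABSTIME the wake-up time
     * does not drift even if we are preempted before calling sleep. */
    int main(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        next.tv_nsec += 10 * 1000 * 1000;          /* +10 ms */
        if (next.tv_nsec >= 1000000000L) {         /* normalize */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        printf("periodic task released\n");
        return 0;
    }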

Scheduling of a Real-Time Process (series of figure slides)

Real-Time Scheduling
Four general classes of real-time scheduling algorithms:
- Static table-driven: a static analysis of feasible schedules determines, at run time, when each task must begin execution
- Static priority-driven preemptive: a traditional priority-driven preemptive scheduler is used, with priorities assigned by static analysis
- Dynamic planning-based: feasibility is determined at run time, before execution of a newly arrived task begins
- Dynamic best effort: no feasibility analysis is performed; the system tries to meet all deadlines and aborts any started task whose deadline has passed

Deadline Scheduling
- Real-time applications are generally not concerned with sheer speed but with completing (or starting) tasks by their deadlines
- Scheduling the task with the earliest deadline first minimizes the fraction of tasks that miss their deadlines

Deadline Scheduling: Information Used
- Ready time
- Starting deadline
- Completion deadline
- Processing time
- Resource requirements
- Priority
- Subtask structure
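A minimal sketch of earliest-deadline selection over the information listed above; the task record is hypothetical and uses only the ready time, deadline, and a completion flag. At each dispatch point the scheduler picks the ready, unfinished task whose deadline is nearest:

    #include <stddef.h>

    struct rt_task {
        double ready_time;      /* when the task becomes ready            */
        double deadline;        /* starting (or completion) deadline      */
        double exec_time;       /* required processing time               */
        int    done;            /* nonzero once the task has completed    */
    };

    /* Earliest-deadline-first choice at time 'now'; returns NULL if
     * no unfinished task is currently ready. */
    struct rt_task *edf_pick(struct rt_task *tasks, size_t n, double now)
    {
        struct rt_task *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (tasks[i].done || tasks[i].ready_time > now)
                continue;
            if (best == NULL || tasks[i].deadline < best->deadline)
                best = &tasks[i];
        }
        return best;
    }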

Two Tasks

Rate Monotonic Scheduling
- Assigns priorities to tasks on the basis of their periods
- The highest-priority task is the one with the shortest period
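A minimal sketch of the classic rate-monotonic schedulability check: total utilization, the sum of Ci/Ti over all tasks, is compared against the bound n(2^(1/n) - 1). The task execution times and periods below are hypothetical:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical periodic tasks: execution time C and period T. */
        double C[] = { 20.0, 40.0, 100.0 };
        double T[] = { 100.0, 150.0, 350.0 };
        int n = 3;

        double U = 0.0;
        for (int i = 0; i < n; i++)
            U += C[i] / T[i];                        /* total utilization */

        double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* ~0.780 for n = 3 */

        printf("U = %.3f, RMS bound = %.3f -> %s\n", U, bound,
               U <= bound ? "schedulable under RMS" : "bound test inconclusive");
        return 0;
    }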

Periodic Task Timing Diagram

Linux Scheduling
Scheduling classes:
- SCHED_FIFO: first-in-first-out real-time threads
- SCHED_RR: round-robin real-time threads
- SCHED_OTHER: other, non-real-time threads
Within each class, multiple priorities may be used.
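A minimal sketch of selecting the SCHED_FIFO class from user space with the standard sched_setscheduler call (root privilege or CAP_SYS_NICE is normally required; the priority value 50 is an arbitrary choice within the real-time range):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp;
        sp.sched_priority = 50;        /* real-time priority within the class */

        /* pid 0 = calling process; switch it to the FIFO real-time class. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running under SCHED_FIFO\n");
        return 0;
    }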

UNIX SVR4 Scheduling
- Highest preference to real-time processes
- Next-highest to kernel-mode processes
- Lowest preference to other user-mode processes

SVR4 Dispatch Queues

Windows 2000 Scheduling
- Priorities are organized into two bands or classes: real-time and variable
- Priority-driven preemptive scheduler