Multiprocessor and Real-Time Scheduling (Chapter 10)
Real-time scheduling will be covered in SYSC3303.
Classifications of Multiprocessors
- Loosely coupled or distributed multiprocessor (cluster)
  - each processor has its own memory and I/O channels
- Functionally specialized processors
  - such as an I/O processor
  - controlled by a master processor
- Tightly coupled multiprocessing
  - processors share main memory
  - controlled by the operating system
- What is the main concern in general for multiprocessing?
  - Processor utilization vs. throughput
3
Synchronization Granularity and Processes - Summary 3
Independent Parallelism
- Separate applications or processes running
- No synchronization among processes
- Example: time sharing
  - average response time to users is reduced
Coarse and Very Coarse-Grained Parallelism
- Synchronization among processes at a very gross level
- Good for concurrent processes running on a multiprogrammed uniprocessor or on multiple processors
- Distributed processing across network nodes can form a single computing environment
- Good when interaction among processes is infrequent
  - otherwise network overhead would slow down communication
Medium-Grained Parallelism
- Parallel processing or multitasking within a single application
- A single application is a collection of threads (or processes)
- Threads usually interact frequently
Fine-Grained Parallelism
- Highly parallel applications
  - Usually a much more complex use of parallelism than is found in the use of threads
- A specialized area
Scheduling Design Issues
Scheduling on a multiprocessor involves:
- Use of multiprogramming on individual processors
  - similar to uniprocessor scheduling
- Assignment of processes to processors
- Actual dispatching of a process
Assignment of Processes to Processors
Two approaches:
- Treat processors as a pooled resource and assign processes to processors on demand
  - A common queue: schedule to any available processor
  - Local queues: dynamic load balancing
    - processes or threads are moved from the queue of one processor to the queue of another
- Permanently assign a process to a processor
  - Allows group or gang scheduling
  - Dedicated short-term queue for each processor
  - Advantage and disadvantage?
    - Less overhead in scheduling
    - A processor could be idle while another processor has a backlog
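The two assignment approaches above can be sketched in a few lines of Python. This is an illustrative toy, not real OS code; the names (`Processor`, `migrate_one`, the process labels) are hypothetical:

```python
from collections import deque

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.local_queue = deque()   # dedicated short-term queue

# Approach 1: one common queue; any available processor takes the next process.
common_queue = deque(["P1", "P2", "P3", "P4"])

def dispatch_from_common(processor):
    return common_queue.popleft() if common_queue else None

# Approach 2: per-processor queues with dynamic load balancing:
# move a process from a backlogged processor's queue to an idle one's.
def migrate_one(busy, idle):
    if busy.local_queue and not idle.local_queue:
        idle.local_queue.append(busy.local_queue.popleft())

cpu0, cpu1 = Processor(0), Processor(1)
cpu0.local_queue.extend(["A", "B", "C"])   # cpu0 has a backlog
migrate_one(cpu0, cpu1)                    # cpu1 was idle; move one process over
print(cpu1.local_queue)                    # deque(['A'])
```

The trade-off on the slide shows up directly: approach 2 avoids contention on `common_queue`, but without `migrate_one` a processor can sit idle while another has a backlog.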
Process Scheduling
- Usually processes are not dedicated to processors
- Queuing
  - A single queue for all processors, or
  - Multiple queues based on priorities
    - All queues feed into the common pool of processors
- The specific scheduling discipline is less important with more than one processor
  - Different scheduling methods can be used for different processors
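A minimal sketch of priority queuing feeding a common processor pool, assuming a single min-heap stands in for the "multiple queues based on priorities" (process names and priority values are made up):

```python
import heapq

ready = []  # min-heap ordered by priority (lower number = higher priority)

def enqueue(priority, name):
    heapq.heappush(ready, (priority, name))

def dispatch():
    # Any available processor in the pool takes the highest-priority process.
    return heapq.heappop(ready)[1] if ready else None

enqueue(2, "editor")
enqueue(0, "interrupt_handler")
enqueue(1, "compiler")
print(dispatch())  # interrupt_handler
print(dispatch())  # compiler
```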
Comparison of One and Two Processors - An Example
Thread Scheduling
- An application can consist of a set of threads that cooperate and execute concurrently in the same address space
- Threads running on separate processors can yield a dramatic performance gain for some applications
Approaches to Thread Scheduling
Four approaches for multiprocessor thread scheduling and processor assignment are:
- Load Sharing: processes are not assigned to a particular processor
- Gang Scheduling: a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis
- Dedicated Processor Assignment: provides implicit scheduling defined by the assignment of threads to processors
- Dynamic Scheduling: the number of threads in a process can be altered during the course of execution
Load Sharing
- Load is distributed evenly across the processors
  - Simplest approach; carries over most directly from a uniprocessor system
- Ensures no processor is idle while work is available
- No centralized scheduler required
- Uses a global queue
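Load sharing can be sketched with worker threads standing in for processors, all pulling from one global queue. A toy model under stated assumptions (`queue.Queue` provides the locking; `None` is used as a shutdown sentinel):

```python
import queue
import threading

global_queue = queue.Queue()   # the single global ready queue
done = []                      # (processor id, task) records
done_lock = threading.Lock()

def processor(pid):
    """An idle 'processor' repeatedly pulls the next ready task."""
    while True:
        item = global_queue.get()
        if item is None:       # sentinel: no more work
            return
        with done_lock:
            done.append((pid, item))

for task in range(6):
    global_queue.put(task)

workers = [threading.Thread(target=processor, args=(i,)) for i in range(2)]
for w in workers:
    w.start()
for _ in workers:
    global_queue.put(None)     # one sentinel per processor
for w in workers:
    w.join()
```

Note that `queue.Queue` serializes access internally, which is exactly the mutual-exclusion bottleneck the next slide raises.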
Disadvantages of Load Sharing
- The central queue needs mutual exclusion
  - may become a bottleneck when more than one processor looks for work at the same time
- Preempted threads are unlikely to resume execution on the same processor
  - if each processor has a local cache, cache usage is less efficient
- If all threads go through the global queue, all threads of a program will not gain access to the processors at the same time
Gang Scheduling
- Simultaneous scheduling of the related threads that make up a single process
- Useful for applications whose performance degrades severely when any part of the application is not running
  - Better for dedicated applications
  - Lower scheduling overhead for those processes
- Threads often need to synchronize with each other
- The number of processors may be smaller than the number of threads on some machines
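A toy simulation of gang scheduling under stated assumptions: each application is a "gang" of threads, time slices alternate round-robin among gangs, and in each slice every thread of the chosen gang gets its own processor. The gang names and sizes are invented for illustration:

```python
def schedule(gangs, processors, slices):
    """Return (gang, busy, idle) per time slice, gangs chosen round-robin."""
    timeline = []
    names = list(gangs)
    for t in range(slices):
        gang = names[t % len(names)]
        running = min(gangs[gang], processors)   # one thread per processor
        timeline.append((gang, running, processors - running))
    return timeline

gangs = {"A": 3, "B": 2}   # application -> number of threads
print(schedule(gangs, processors=4, slices=4))
# [('A', 3, 1), ('B', 2, 2), ('A', 3, 1), ('B', 2, 2)]
```

The `idle` column makes the cost visible: whenever a gang is smaller than the processor count, the leftover processors sit idle for that slice.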
Dedicated Processor Assignment
- When an application is scheduled, each of its threads is assigned to a processor for the duration of the application
- Some processors may be idle
- Avoids process switching
Dynamic Scheduling
- The number of threads in a process is altered dynamically by the application
- The operating system adjusts the load to improve utilization
  - assign idle processors first
  - a new arrival may be assigned a processor taken from a job currently using more than one processor
  - otherwise, hold the request until a processor becomes available
  - new arrivals are given a processor before existing running applications
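The allocation rules above can be sketched as a small policy function. This is an illustrative reading of the slide, not a real scheduler; the job names and the `alloc` map (job -> processors held) are assumptions:

```python
def allocate(alloc, total, new_job):
    """Apply the dynamic-scheduling rules to one new arrival."""
    in_use = sum(alloc.values())
    if in_use < total:                 # rule 1: use an idle processor
        alloc[new_job] = 1
        return "assigned"
    for job, n in alloc.items():       # rule 2: shrink a multi-processor job
        if n > 1:
            alloc[job] = n - 1
            alloc[new_job] = 1
            return "assigned"
    return "held"                      # rule 3: wait for a free processor

alloc = {"J1": 3, "J2": 1}             # 4 processors, all busy
print(allocate(alloc, 4, "J3"))        # J1 shrinks to 2; J3 gets 1 -> "assigned"
print(allocate(alloc, 4, "J4"))        # J1 shrinks to 1; J4 gets 1 -> "assigned"
```

After these two calls every job holds exactly one processor, so a further arrival would be held until something completes.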