CS677: Distributed OS, Lecture 7

Multiprocessor Scheduling
We consider only shared-memory multiprocessors. Salient features:
–One or more caches per processor: cache affinity is important
–Semaphores/locks are typically implemented as spin-locks: preemption in the middle of a critical section is costly

Multiprocessor Scheduling
–Central queue: the single shared queue can become a bottleneck
–Distributed (per-processor) queues: require load balancing between queues

Scheduling
Common mechanisms combine a central queue with per-processor queues (e.g., SGI IRIX)
Exploit cache affinity: try to schedule a process/thread on the same processor where it last executed
Context-switch overhead: quantum sizes are larger on multiprocessors than on uniprocessors
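
To make the affinity idea concrete, here is a minimal sketch (my own illustration, not IRIX code) of a dispatcher that keeps per-processor queues plus a shared queue and prefers the processor a task last ran on; the Task, CPU, and Scheduler names are assumptions.

```python
# Sketch of affinity-aware dispatch with per-CPU queues plus a shared queue.
# All names (Task, CPU, Scheduler) are illustrative assumptions.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    last_cpu: int | None = None    # processor the task last ran on

@dataclass
class CPU:
    cid: int
    runq: deque = field(default_factory=deque)   # per-processor queue

class Scheduler:
    def __init__(self, n_cpus):
        self.cpus = [CPU(i) for i in range(n_cpus)]
        self.shared = deque()                    # central queue (can be a bottleneck)

    def enqueue(self, task: Task):
        # Prefer the processor whose cache likely still holds the task's state.
        if task.last_cpu is not None:
            self.cpus[task.last_cpu].runq.append(task)
        else:
            self.shared.append(task)

    def pick_next(self, cpu_id: int) -> Task | None:
        cpu = self.cpus[cpu_id]
        if cpu.runq:                              # affinity hit: local queue first
            task = cpu.runq.popleft()
        elif self.shared:                         # otherwise take global work
            task = self.shared.popleft()
        else:
            return None                           # a real kernel might steal work here
        task.last_cpu = cpu_id
        return task
```

A real scheduler would also steal work from other per-processor queues when both local and shared queues are empty, which is where the load-balancing concern from the previous slide comes back in.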

Parallel Applications on SMPs
Effect of spin-locks: what happens if preemption occurs in the middle of a critical section?
–Preempt the entire application at once (co-scheduling)
–Raise the lock holder's priority so preemption does not occur (smart scheduling)
–Both of the above
Provide applications with more control over their scheduling
–Users should not have to check whether it is safe to make certain system calls
–If one thread blocks, others must be able to run
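
A rough sketch of the smart-scheduling idea (again my own illustration, not the lecture's code): the lock holder raises a per-thread do-not-preempt hint that a cooperative user-level scheduler checks before preempting. SmartSpinLock and safe_to_preempt are hypothetical names.

```python
# Sketch of "smart scheduling": a thread advertises that it holds a spin-lock
# so a cooperative user-level scheduler avoids preempting it mid-critical-section.
import threading

_in_critical_section = threading.local()        # per-thread do-not-preempt hint

class SmartSpinLock:
    def __init__(self):
        self._flag = threading.Lock()            # stands in for a test-and-set word

    def acquire(self):
        _in_critical_section.value = True        # raise the hint before entering
        while not self._flag.acquire(blocking=False):
            pass                                 # spin until the lock is free

    def release(self):
        self._flag.release()
        _in_critical_section.value = False       # preemption is safe again

def safe_to_preempt() -> bool:
    """Called by a user-level scheduler before preempting the current thread."""
    return not getattr(_in_critical_section, "value", False)
```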

Distributed Scheduling: Motivation
Distributed system with N workstations
–Model each workstation as an identical, independent M/M/1 system
–Utilization u, so P(a given system is idle) = 1 - u
What is the probability that at least one system is idle while at least one job is waiting somewhere else?
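
One way to see the answer numerically is to sample queue lengths from the M/M/1 steady-state distribution, P(k jobs in system) = (1 - u) u^k, independently for each workstation and test both conditions. The Monte Carlo sketch below is my own illustration, not part of the lecture.

```python
# Monte Carlo estimate of P(at least one workstation idle AND at least one job
# waiting), assuming N independent M/M/1 queues with utilization u.
import random

def sample_queue_length(u: float) -> int:
    """Draw a queue length from the M/M/1 steady-state (geometric) distribution."""
    k = 0
    while random.random() < u:     # continue with probability u, so P(K >= k) = u**k
        k += 1
    return k

def prob_idle_and_waiting(n_nodes: int, u: float, trials: int = 100_000) -> float:
    hits = 0
    for _ in range(trials):
        lengths = [sample_queue_length(u) for _ in range(n_nodes)]
        some_idle = any(k == 0 for k in lengths)      # a workstation with nothing to do
        some_waiting = any(k >= 2 for k in lengths)   # a job queued behind another job
        hits += some_idle and some_waiting
    return hits / trials

if __name__ == "__main__":
    for u in (0.2, 0.5, 0.8):
        print(f"u = {u}: P(idle & waiting) ~ {prob_idle_and_waiting(20, u):.2f}")
```

For moderate utilization (u around 0.5, with, say, 20 workstations) the estimate comes out close to 1, which sets up the implications on the next slide.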

Implications
The probability is high for moderate system utilization
–Potential for performance improvement via load distribution
–High utilization => little benefit (idle nodes are rare)
–Low utilization => jobs are rarely waiting
Distributed scheduling (aka load balancing) is therefore potentially useful
What is the performance metric?
–Mean response time
What is the measure of load?
–Must be easy to measure
–Must reflect the performance improvement

Design Issues
Measure of load
–Queue length at the CPU, CPU utilization
Types of policies
–Static: decisions hardwired into the system
–Dynamic: uses current load information
–Adaptive: the policy itself varies according to load
Preemptive versus non-preemptive transfers
Centralized versus decentralized control
Stability: transferring jobs faster than the system can process them => instability; a good policy must converge to a balanced load
–An unstable policy lets jobs float around between nodes while the load oscillates

Components
Transfer policy: when to transfer a process?
–Threshold-based policies are common and easy
Selection policy: which process to transfer?
–Prefer new processes
–Transfer cost should be small compared to execution cost: select processes with long execution times
Location policy: where to transfer the process?
–Polling, random, nearest neighbor
Information policy: when to collect load information, and from where?
–Demand-driven [only when a node becomes a sender/receiver], time-driven [periodic], state-change-driven [send an update when local load changes]
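
The four components plug together roughly as shown below; every class and method name in this skeleton (LoadBalancer, queue_len, probe, receive) is an illustrative assumption rather than a real API.

```python
# Skeleton of a load-distribution agent built from the four policy components.
import random

class LoadBalancer:
    def __init__(self, node, peers, threshold=2):
        self.node = node            # local node: assumed to expose queue_len(), newest_job()
        self.peers = peers          # peers: assumed to expose probe() and receive()
        self.threshold = threshold  # T used by the transfer policy

    def transfer_policy(self) -> bool:
        """When to transfer: here, when local queue length exceeds threshold T."""
        return self.node.queue_len() > self.threshold

    def selection_policy(self):
        """Which process to transfer: prefer a newly arrived (cheap-to-move) job."""
        return self.node.newest_job()

    def location_policy(self):
        """Where to transfer: probe a few random peers, take the first below T."""
        for peer in random.sample(self.peers, k=min(3, len(self.peers))):
            if peer.probe() < self.threshold:
                return peer
        return None                 # nobody suitable: keep the job locally

    def on_job_arrival(self):
        """Information policy here is demand-driven: probe only when overloaded."""
        if self.transfer_policy():
            target = self.location_policy()
            if target is not None:
                target.receive(self.selection_policy())
```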

Sender-initiated Policy
Transfer policy: threshold-based (a node becomes a sender when a new arrival pushes its load above threshold T)
Selection policy: the newly arrived process
Location policy: three variations
–Random: may generate lots of transfers => limit the maximum number of transfers
–Threshold: probe n nodes sequentially; transfer to the first node below threshold, if none, keep the job
–Shortest: poll Np nodes in parallel; choose the least loaded node below T
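
As a sketch of the Shortest variant (illustrative only; local.queue_len(), newest_job(), peer.probe_queue_len(), and peer.receive() are assumed interfaces):

```python
# Sender-initiated "Shortest" location policy: poll Np peers in parallel and
# send the newly arrived job to the least loaded peer below T.
import random
from concurrent.futures import ThreadPoolExecutor

def shortest_location_policy(local, peers, T: int, Np: int) -> str:
    if local.queue_len() <= T:                         # transfer policy: not overloaded
        return "kept"
    polled = random.sample(peers, k=min(Np, len(peers)))
    if not polled:
        return "kept"
    with ThreadPoolExecutor(max_workers=len(polled)) as pool:   # probe in parallel
        loads = list(pool.map(lambda p: (p.probe_queue_len(), p), polled))
    load, target = min(loads, key=lambda pair: pair[0])
    if load < T:                                       # least loaded node, if below T
        target.receive(local.newest_job())             # selection policy: new arrival
        return "transferred"
    return "kept"                                      # nobody suitable: keep the job
```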

Receiver-initiated Policy
Transfer policy: if a departing process causes the local load to drop below T, find a process from elsewhere
Selection policy: a newly arrived or partially executed process
Location policy:
–Threshold: probe up to Np other nodes sequentially; transfer from the first one above threshold, if none, do nothing
–Shortest: poll n nodes in parallel, choose the node with the heaviest load above T
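
The Threshold variant on the receiver side looks symmetric; this pull-based sketch is again illustrative, with probe_queue_len() and steal_job() as assumed peer operations.

```python
# Receiver-initiated Threshold variant: when a departure leaves this node
# underloaded, probe peers sequentially and pull work from the first overloaded one.
import random

def receiver_threshold_pull(local, peers, T: int, Np: int):
    if local.queue_len() >= T:                      # transfer policy: still busy enough
        return None
    for peer in random.sample(peers, k=min(Np, len(peers))):
        if peer.probe_queue_len() > T:              # found an overloaded sender
            return peer.steal_job()                 # selection is done at the sender side
    return None                                     # nobody overloaded: do nothing
```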

Symmetric Policies
Nodes act as both senders and receivers: combine the previous two policies without change
–Use the average load as the threshold
Improved symmetric policy: exploit polling information
–Two thresholds: LT and UT, with LT <= UT
–Maintain sender, receiver, and OK lists of nodes using the polling information
–Sender: poll the first node on the receiver list …
–Receiver: poll the first node on the sender list …
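
A small sketch of how the improved symmetric policy might keep its sender/receiver/OK lists up to date from polling replies; classify() and update_lists() are my own names, while LT and UT come from the slide.

```python
# Node classification for the improved symmetric policy (illustrative sketch).
def classify(load, LT, UT):
    """Classify a node by its reported load using two thresholds, LT <= UT."""
    if load < LT:
        return "receiver"       # underloaded: can accept work
    if load > UT:
        return "sender"         # overloaded: wants to shed work
    return "OK"                 # neither: leave it alone

def update_lists(lists, node, load, LT, UT):
    """Move 'node' onto the right list whenever polling reveals its load."""
    for name in ("sender", "receiver", "OK"):
        if node in lists[name]:
            lists[name].remove(node)
    lists[classify(load, LT, UT)].append(node)

# Usage: a sender polls lists["receiver"][0] first; a receiver polls
# lists["sender"][0] first, updating the lists with every reply.
lists = {"sender": [], "receiver": [], "OK": []}
update_lists(lists, "nodeA", load=7, LT=2, UT=5)   # nodeA gets classified as a sender
```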

Case Study: V-System (Stanford)
State-change driven information policy
–A significant change in CPU/memory utilization is broadcast to all other nodes
The M least loaded nodes are receivers, the others are senders
Sender-initiated, with a new-job selection policy
Location policy: probe a random receiver; if it is still a receiver, transfer the job, else try another

Sprite (Berkeley)
Workstation environment => the owner is king!
Centralized information policy: a coordinator keeps the load information
–State-change driven information policy
–Receiver: a workstation with no keyboard/mouse activity for 30 seconds and # active processes < number of processors
Selection policy: done manually by the user => the workstation becomes a sender
Location policy: the sender queries the coordinator
A workstation running a foreign process becomes a sender if its owner becomes active; the selection policy then picks the foreign process, which is sent back to its home workstation

Sprite (contd)
Sprite process migration
–Facilitated by the Sprite file system
–State transfer:
 Swap everything out
 Send the page tables and file descriptors to the receiver
 Demand-page the process in
–The only remaining dependencies are communication-related
 Redirect communication from the home workstation to the receiver

Code and Process Migration
Motivation
How does migration occur?
Resource migration
Agent-based systems
Details of process migration

Motivation
Key reasons: performance and flexibility
Process migration (aka strong mobility)
–Improved system-wide performance: better utilization of system-wide resources
–Examples: Condor, DQS
Code migration (aka weak mobility)
–Ship server code to the client, e.g., for form filling (reduces communication; no need to pre-link stubs with the client)
–Ship parts of the client application to the server instead of shipping data from the server to the client (e.g., databases)
–Improve parallelism, e.g., agent-based web searches

Motivation
Flexibility
–Dynamic configuration of distributed systems
–Clients don't need preinstalled software: download it on demand

Migration models
Process = code segment + resource segment + execution segment
Weak versus strong mobility
–Weak => the transferred program always starts from its initial state
Sender-initiated versus receiver-initiated migration
Sender-initiated (the code is at the sender)
–Example: a client sending a query to a database server
–The client should be pre-registered
Receiver-initiated
–Example: Java applets
–The receiver can be anonymous
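
As a toy example of weak mobility (purely illustrative; the port number, the main() convention, and the use of exec are all my own assumptions), the receiver below accepts a code segment and runs it from its initial state, with no execution state transferred:

```python
# Minimal sketch of weak mobility: the receiver gets a self-contained piece of
# code and always starts it from scratch; no stack or program counter moves.
import socket

def serve_migrated_code(port=9000):
    """Accept one shipped code segment and execute it from its initial state."""
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            source = conn.makefile().read()     # the shipped code segment (text)
    namespace = {}
    exec(source, namespace)                     # execution starts from the initial state
    namespace["main"]()                         # assume the shipped code defines main()

# Strong mobility would additionally capture and restore the execution segment
# (stack, registers, open descriptors), which is far harder.
```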

Who executes the migrated entity?
Code migration:
–Execute it in a separate process
–[Applets] Execute it in the target process
Process migration:
–Remote cloning
–Migrate the process itself

Models for Code Migration
[Figure: alternatives for code migration]

Do Resources Migrate?
Depends on the resource-to-process binding
–By identifier: a specific web site, FTP server
–By value: Java libraries
–By type: printers, local devices
Depends on the type of "attachment" to the machine
–Unattached to any node: data files
–Fastened resources (can be moved, but only at high cost): databases, web sites
–Fixed resources: local devices, communication endpoints

Resource Migration Actions
Actions to be taken with respect to references to local resources when migrating code to another machine:
GR: establish a global system-wide reference
MV: move the resource
CP: copy the resource
RB: rebind the process to a locally available resource

Process-to-resource        Resource-to-machine binding
binding                Unattached        Fastened          Fixed
By identifier          MV (or GR)        GR (or MV)        GR
By value               CP (or MV, GR)    GR (or CP)        GR
By type                RB (or GR, CP)    RB (or GR, CP)    RB (or GR)
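
The table reads as a lookup from the two binding types to a preferred action plus fallbacks; here is a tiny sketch of that lookup (my own encoding of the table above):

```python
# Lookup sketch for the resource-migration decision table; the first action in
# each tuple is the preferred one, the rest are fallbacks.
ACTIONS = {
    # (process-to-resource binding, resource-to-machine binding): actions
    ("identifier", "unattached"): ("MV", "GR"),
    ("identifier", "fastened"):   ("GR", "MV"),
    ("identifier", "fixed"):      ("GR",),
    ("value",      "unattached"): ("CP", "MV", "GR"),
    ("value",      "fastened"):   ("GR", "CP"),
    ("value",      "fixed"):      ("GR",),
    ("type",       "unattached"): ("RB", "GR", "CP"),
    ("type",       "fastened"):   ("RB", "GR", "CP"),
    ("type",       "fixed"):      ("RB", "GR"),
}

def preferred_action(proc_binding: str, machine_binding: str) -> str:
    """Return the preferred action (GR/MV/CP/RB) for a resource reference."""
    return ACTIONS[(proc_binding, machine_binding)][0]

# e.g., a resource bound by type on a fixed device (a local printer):
assert preferred_action("type", "fixed") == "RB"   # rebind to a local equivalent
```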

Migration in Heterogeneous Systems
Systems can be heterogeneous (different architectures, operating systems)
–Support only weak mobility: recompile the code, no run-time state to transfer
–Strong mobility: recompile the code segment and transfer the execution segment [migration stack]
–Virtual machines: interpret source code (scripts) or intermediate code [Java]