Lottery Scheduling
Ish Baid

Based on "Lottery Scheduling: Flexible Proportional-Share Resource Management" by C. A. Waldspurger and W. E. Weihl.

Outline
- Problem
- What is it?
- Implementation
- Benefits
- Experimentation
- Findings
- Other Scheduling Algorithms
- Conclusion

Problem
- "Scheduling computations in multi-threaded systems is a complex and challenging problem."
- Resources are scarce and must be allocated well
- Priority-based scheduling can be absolute, arbitrary, and inadequate for insulating resource allocation policies
- Overall, this limits throughput and response time

From the paper: "Scarce resources must be multiplexed to service requests of varying importance, and the policy chosen to manage this multiplexing can have an enormous impact on throughput and response time."

What is it?
- A random, uniform way to allocate resources
- Hold a lottery: select a winning ticket and grant the resource to the client holding it
- Various techniques adjust ticket proportions over time

From the paper:
- "We have developed lottery scheduling, a novel randomized mechanism that provides responsive control over the relative execution rates of computations. Lottery scheduling efficiently implements proportional-share resource management: the resource consumption rates of active computations are proportional to the relative shares that they are allocated."
- "Lottery scheduling is a randomized resource allocation mechanism. Resource rights are represented by lottery tickets. Each allocation is determined by holding a lottery; the resource is granted to the client with the winning ticket. This effectively allocates resources to competing clients in proportion to the number of tickets that they hold."
- Scheduling by lottery is probabilistically fair: a client's throughput is proportional to its ticket allocation, with accuracy that improves with √n (the number of lotteries held)
- Since any client with a non-zero number of tickets will eventually win a lottery, the conventional problem of starvation does not exist
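The lottery draw described above can be sketched as a toy simulation in Python (this is an illustration, not the paper's kernel implementation; the client names and ticket counts are made up):

```python
import random

def hold_lottery(clients):
    """Draw a uniform ticket number, then walk the client list until
    the running ticket sum passes it; that client wins the resource."""
    total = sum(tickets for _, tickets in clients)
    winning = random.randrange(total)  # uniform in [0, total)
    partial = 0
    for name, tickets in clients:
        partial += tickets
        if winning < partial:
            return name

# Two clients with a 2:1 ticket ratio win in roughly a 2:1 proportion,
# and the proportion tightens as more lotteries are held.
clients = [("A", 200), ("B", 100)]
wins = {"A": 0, "B": 0}
for _ in range(30000):
    wins[hold_lottery(clients)] += 1
```

Running many draws shows the probabilistic fairness claim: A's share of wins converges toward 2/3, and B can never starve because every draw gives it a nonzero chance.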

Implementation
- Select a random lottery number n and find the winning ticket by traversing the clients with a:
  - List
  - Tree
- Currencies
  - Tickets are active only while their thread is on the run queue
  - Can be used to encapsulate rights for a resource
- Client-server computation
  - A client task will often transfer its tickets to the server task

From the paper: "Each currency maintains an active amount sum for all of its issued tickets. A ticket is active while it is being used by a thread to compete in a lottery. When a thread is removed from the run queue, its tickets are deactivated; they are reactivated when the thread rejoins the run queue. If a ticket deactivation changes a currency's active amount to zero, the deactivation propagates to each of its backing tickets. Similarly, if a ticket activation changes a currency's active amount from zero, the activation propagates to each of its backing tickets. A currency's value is computed by summing the value of its backing tickets. A ticket's value is computed by multiplying the value of the currency in which it is denominated by its share of the active amount issued in that currency."
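The currency valuation rules quoted above can be illustrated with a small Python sketch. The `currencies` table and function names here are invented for illustration (they are not the paper's Mach data structures): a currency "alice" is funded by a single 3000-base-unit backing ticket and has issued 300 alice-denominated tickets that are currently active.

```python
# Hypothetical currency table: each currency records its backing
# tickets and the active amount of tickets it has issued.
currencies = {
    "alice": {"backing": [("base", 3000)], "active_amount": 300},
}

def currency_value(name):
    """A currency's value is the sum of the values of its backing tickets."""
    if name == "base":
        return 1  # base units are the primitive unit of value
    return sum(amount * currency_value(backer)
               for backer, amount in currencies[name]["backing"])

def ticket_value(currency, amount):
    """A ticket's value is its currency's value multiplied by the
    ticket's share of the active amount issued in that currency."""
    if currency == "base":
        return amount
    return currency_value(currency) * amount / currencies[currency]["active_amount"]

# A 100-ticket stake in "alice" is worth 3000 * (100/300) = 1000 base units.
```

Note how deactivation matters in this scheme: if alice's active amount dropped from 300 to 200, each remaining alice ticket would become proportionally more valuable, since the same 3000 base units back fewer active tickets.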

Modular Resource Management
- Ticket transfers
  - Explicit transfers of tickets from one client to another
  - Used to unblock threads when a client blocks on a dependency
- Ticket inflation
  - Violates modularity, so it is useful only among trusted clients
  - Valuable when a task does not need all of its tickets
  - Provides adjusted resource allocation without explicit communication
- Compensation tickets
  - Ensure fairness for clients that yield their quantum early
  - Inflate a client's ticket value so it reaches its intended resource allocation

From the paper: "The explicit representation of resource rights as lottery tickets provides a convenient substrate for modular resource management. Tickets can be used to insulate the resource management policies of independent modules, because each ticket probabilistically guarantees its owner the right to a worst-case resource consumption rate. Since lottery tickets abstractly encapsulate resource rights, they can also be treated as first-class objects that may be transferred in messages. Ticket transfers are explicit transfers of tickets from one client to another. Ticket transfers can be used in any situation where a client blocks due to some dependency. Ticket inflation is an alternative to explicit ticket transfers in which a client can escalate its resource rights by creating more lottery tickets. A client which consumes only a fraction f of its allocated resource quantum can be granted a compensation ticket that inflates its value by 1/f until the client starts its next quantum. This ensures that each client's resource consumption, equal to f times its per-lottery win probability p, is adjusted by 1/f to match its allocated share p. Without compensation tickets, a client that does not consume its entire allocated quantum would receive less than its entitled share of the processor."

Example from the paper: suppose threads A and B each hold tickets valued at 400 base units. Thread A always consumes its entire 100 millisecond time quantum, while thread B uses only 20 milliseconds before yielding the processor. Since both A and B have equal funding, they are equally likely to win a lottery when both compete for the processor. However, thread B uses only f = 1/5 of its allocated time, allowing thread A to consume five times as much CPU, in violation of their 1:1 allocation ratio. To remedy this situation, thread B is granted a compensation ticket valued at 1600 base units when it yields the processor. When B next competes for the processor, its total funding will be 400/f = 2000 base units. Thus, on average B will win the processor lottery five times as often as A, each time consuming 1/5 as much of its quantum as A, achieving the desired 1:1 allocation ratio.
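The compensation arithmetic in the A/B example works out as follows (a few lines of Python reproducing the paper's numbers):

```python
# Thread B's funding and quantum usage, from the paper's example.
base_funding = 400   # base units
quantum_ms = 100     # full time quantum
used_ms = 20         # B yields after 20 ms

f = used_ms / quantum_ms                    # fraction consumed: 1/5
compensation = base_funding * (1 / f - 1)   # 400 * 4 = 1600 base units
total_funding = base_funding + compensation # 400 / f = 2000 base units

print(total_funding)  # prints 2000.0
```

With 2000 base units against A's 400, B wins five times as often, and since each win consumes only a fifth of a quantum, the CPU split returns to the intended 1:1.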

Benefits
- Fairness
  - Prevents the typical starvation problem
- Flexible control over resource allocation
  - Allows dynamic adjustment of resource allocation via ticket proportions
- Modular resource management
  - Encapsulates resource rights
- Low overhead

From the paper: "Lottery scheduling also provides excellent support for modular resource management."

Experimentation
- Fairness
  - Tests whether tasks use resources in proportion to their allocation
  - Very accurate for the most part
- Flexible control
  - Multimedia applications
  - How tasks adjust to account for error

Fairness Test | 2:1 Allocation

Comments
- "I don't see how it is impossible for starvation to occur. Though increasingly unlikely, it seems that it is still possible."
- "Figure 5 clearly demonstrates that fairness varies quite a bit. This can have a detrimental effect on latency-sensitive applications."

Other Scheduling Algorithms
- Weighted Fair Queueing (WFQ)
  - Data packet scheduling algorithm
  - Allows specifying the fraction of link capacity each flow will receive
  - Each flow is protected from the others
  - Calculates a virtual finish time for each packet and uses it to decide which packet to emit next
- Stride scheduling
  - Improved version of lottery scheduling
  - More accurate allocation of resources in proportion to tickets
  - Lower response-time variability
  - Based on rate-based flow control for networks
  - Provides stronger deterministic guarantees

From Wikipedia: "Weighted fair queueing (WFQ) is a data packet scheduling algorithm used by network schedulers. WFQ is both a packet-based implementation of the generalized processor sharing policy (GPS), and a natural generalization of fair queuing (FQ): whereas FQ shares the link's capacity in equal subparts, WFQ allows specifying, for each flow, which fraction of the capacity will be given. Weighted fair queueing (WFQ) is also known as Packet-by-Packet GPS (PGPS or P-GPS) since it approximates generalized processor sharing 'to within one packet transmission time, regardless of the arrival patterns.'[1]"
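A minimal sketch of stride scheduling in Python, assuming the classic pass/stride formulation (the STRIDE1 constant and the client names and ticket counts are illustrative): each client's stride is inversely proportional to its tickets, the client with the smallest pass value runs next, and its pass then advances by its stride.

```python
import heapq

STRIDE1 = 1 << 20  # large constant; stride = STRIDE1 / tickets

def stride_schedule(clients, quanta):
    """Deterministic proportional share: always run the client with
    the smallest pass value, then advance its pass by its stride."""
    heap = [(STRIDE1 // t, STRIDE1 // t, name) for name, t in clients]
    heapq.heapify(heap)  # entries are (pass, stride, name)
    runs = {name: 0 for name, _ in clients}
    for _ in range(quanta):
        pass_val, stride, name = heapq.heappop(heap)
        runs[name] += 1
        heapq.heappush(heap, (pass_val + stride, stride, name))
    return runs

# With a 3:1 ticket ratio, 400 quanta split exactly 300:100 --
# no lottery variance, hence the stronger deterministic guarantees.
allocation = stride_schedule([("A", 300), ("B", 100)], 400)
```

The contrast with lottery scheduling is that the allocation error here stays bounded at every point in time, whereas a lottery only converges to the right proportions on average.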

Stride vs Lottery

Conclusion
- Lottery scheduling provides:
  - Fairness
  - Flexible control
  - Modular resource management
- Other scheduling algorithms
  - WFQ: flexible scheduling for network packets
  - Stride: improved lottery scheduling
- Note: this research is old; modern scheduling algorithms have improved greatly.

Any questions?