Scalably Scheduling Processes with Arbitrary Speedup Curves (Better Scheduling in the Dark)
Jeff Edmonds, York University
Kirk Pruhs, University of Pittsburgh
SODA 2009

Every Deterministic Nonclairvoyant Scheduler has a Suboptimal Load Threshold
Jeff Edmonds, York University
Submitted to STOC 2009

The Scheduling Problem
Allocate p processors to a stream of n jobs arriving online.
Measure of quality: total flow time F(A, I) = Σ_{i=1}^{n} (c_i − r_i) = ∫_t n_t dt, where r_i is job i's release time, c_i its completion time, and n_t the number of jobs alive at time t.
Competitive ratio: max_I F(A, I) / F(Opt, I).
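For concreteness, a minimal sketch (mine, not the talk's) of these two quantities, assuming each job is given as a (release, completion) pair:

    def total_flow_time(jobs):
        # jobs: list of (release_time, completion_time) pairs
        return sum(c - r for r, c in jobs)

    def competitive_ratio(alg_jobs, opt_jobs):
        # ratio of the algorithm's total flow time to the optimal schedule's on the same input
        return total_flow_time(alg_jobs) / total_flow_time(opt_jobs)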

Examples of Schedulers
Shortest Remaining Processing Time (SRPT)
Shortest Elapsed Time First (SETF)
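A minimal sketch (mine, not the talk's) of the two selection rules on a single processor; the remaining work is available only to the clairvoyant SRPT, while SETF uses only the service it has already given each job:

    # each alive job is a dict: {'remaining': work_left, 'elapsed': service_received}

    def srpt_pick(alive):
        # clairvoyant: run the job with the least remaining work
        return min(alive, key=lambda job: job['remaining'])

    def setf_pick(alive):
        # nonclairvoyant: run the job that has received the least service so far
        return min(alive, key=lambda job: job['elapsed'])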

Online vs. Optimal
Online: does not see the future, only the jobs that have already arrived.
Optimal: all knowing, all powerful.
Shortest Remaining Processing Time (SRPT) is optimal: max_I F(SRPT, I) / F(Opt, I) = 1, i.e. 1-competitive.

Nonclairvoyant vs. Optimal
Nonclairvoyant: does not see the future and does not know the remaining work of the jobs it holds.
Optimal: all knowing, all powerful.
Nonclairvoyant schedulers are not competitive.

Nonclairvoyant Lower Bounds
F(SETF, I) / F(Opt, I) = Ω(n)
F(Equi, I) / F(Opt, I) = Ω(n / log n)
Every nonclairvoyant scheduler: F(Nonclairvoyant, I) / F(Opt, I) = Ω(n^{1/3}) [MPT]

Performance vs. Load
[Plots: average performance (flow time relative to Opt) as a function of load.]
Without extra resources, max_I F(A, I) / F(Opt, I) = Ω(n): performance blows up well before full load.
The goal, with a slightly faster processor: max_I F(A_s, I) / F(Opt, I) = O(1), i.e. constant performance up to nearly full load.

Resource Augmentation
Nonclairvoyant: does not see the future, but is given extra speed.
Optimal: all knowing, all powerful, speed 1.
Is the nonclairvoyant scheduler now competitive?

Resource Augmentation
[KP]  F(SETF_{1+ε}, I) / F(Opt_1, I) = Θ(1/ε)
[E]   F(Equi_{2+ε}, I) / F(Opt_1, I) = Θ(1/ε); speed 2 is required.

Sublinear Nondecreasing Speedup Functions
A set of jobs J = {1, …, n}.
Each job i is a sequence of phases J_i = ⟨J_i^1, …, J_i^{q_i}⟩.
Each phase J_i^q = ⟨W_i^q, Γ_i^q⟩ has work W_i^q and a speedup function Γ_i^q(p): the rate at which the work is processed when the phase is given p processors.
Each Γ is nondecreasing and sublinear.
Examples: sequential phases (Γ(p) = 1 for every p) and fully parallelizable phases (Γ(p) = p).
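A sketch of this job model in code (my encoding, not the talk's), with the two extreme speedup curves the later slides use:

    def sequential(p):
        # rate 1 no matter how many processors the phase is given
        return 1.0

    def parallelizable(p):
        # rate grows linearly with the processors allocated
        return float(p)

    # a job is a list of phases; each phase is (work, speedup_function)
    job = [(5.0, parallelizable), (2.0, sequential), (3.0, parallelizable)]

    def processing_rate(phase, processors):
        work, gamma = phase
        return gamma(processors)  # work of this phase completed per unit time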

Sublinear Nondecreasing Speedup Functions
Nonclairvoyant with extra speed vs. the all-knowing, all-powerful optimal: is it still competitive when jobs have arbitrary sublinear nondecreasing speedup curves?

Sublinear Nondecreasing Speedup Functions
[Figure: jobs arriving over time; the jobs currently alive under Opt.]
Opt gives all of its resources to the parallelizable jobs and hence completes them as they arrive. The sequential jobs complete even with no resources.

Sublinear Nondecreasing Speedup Functions
[Figure: the jobs currently alive under SETF_s.]
Shortest Elapsed Time First (SETF) gives all of its resources to a sequential job, wasting them. The parallelizable jobs, getting no resources, never complete.
F(SETF_s, I) / F(Opt_1, I) = Ω(n).

Sublinear Nondecreasing Speedup Functions
[Figure: the jobs currently alive under Equi_{1+ε}; ℓ of them are sequential.]
Equi wastes resources on the sequential jobs but has only ε extra speed. Because Equi spreads its resources evenly, most are wasted on the sequential jobs; the parallelizable jobs do not get enough and fall behind.
F(Equi_{1+ε}, I) / F(Opt_1, I) = Ω(ℓ).

Sublinear Nondecreasing Speedup Functions
[Figure: the input, Opt, and Equi_{2+ε}.]
[E]  F(Equi_{2+ε}, I) / F(Opt_1, I) = Θ(1/ε); speed 2 is required.

Latest Arrival Processor Sharing (LAPS)
Consider the n_t jobs currently alive, sorted by arrival time.
SETF: works on 1 job, but that job may be sequential, wasting the resources.
Equi: shares among all n_t jobs — spread too thin, so it needs speed 2+ε.
LAPS_β: the compromise — share the processors equally among the βn_t latest-arriving jobs.
New result [EP]: F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1) for constant β and ε, i.e. competitive with only 1+ε speed.
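A minimal sketch (mine, not the paper's pseudocode) of the LAPS_β allocation rule: share the p processors equally among the ⌈β·n_t⌉ most recently arrived alive jobs.

    import math

    def laps_allocation(alive, beta, p):
        # alive: list of (job_id, arrival_time) for the n_t currently alive jobs
        # returns {job_id: processing power}; only the latest ceil(beta*n_t) arrivals get any
        n_t = len(alive)
        if n_t == 0:
            return {}
        k = max(1, math.ceil(beta * n_t))
        favored = sorted(alive, key=lambda job: job[1])[-k:]  # the k latest arrivals
        return {job_id: p / k for job_id, _ in favored}

With beta = 1 this is Equi; with beta near 1/n_t it focuses on a single job, much like SETF.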

The Spectrum: SETF < LAPS_β < Equi
Again the n_t alive jobs, sorted by arrival time.
SETF (β ≈ 1/n_t, 1 job): F(SETF_{1+ε}, I) / F(Opt_1, I) = Θ(1/ε), but only when the work is parallelizable.
Equi (β = 1, all n_t jobs): F(Equi_{2+ε}, I) / F(Opt_1, I) = Θ(1/ε).
LAPS_β (βn_t jobs): the compromise.
New result [EP]: F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1) for constant β and ε.

Backwards Quantifiers
Desired result: ∃ Alg ∀ ε : F(Alg_{1+ε}, I) / F(Opt_1, I) is bounded — a single algorithm that is competitive for every ε > 0.
Obtained [EP]: ∀ ε ∃ Alg : F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1/ε²) with β = ε/2 — the algorithm must be tuned to ε.
New result [E, submitted to STOC 2009]: ∀ Alg ∃ ε : F(Alg_{1+ε}, I) / F(Opt_1, I) = ω(1) — the quantifiers cannot be reversed; every deterministic nonclairvoyant scheduler has a suboptimal load threshold.

Performance vs. Load Threshold
Defn: A set of jobs I has load L ∈ [0, 1] if it can be optimally handled with speed L, i.e. F(Opt_L, I) is bounded.
Defn: F_β(L) = max_{I with load L} F(LAPS_{⟨β, 1⟩}, I) / F(Opt, I).
Equi (β = 1) has the best performance, but it can only handle half load, L = 1/2.
Small β can handle almost full load, L ≈ 1 − β, but its performance degrades with 1/β.

Lower Bound
[Figure: Opt vs. an algorithm spreading over the βn_t latest jobs — too thin, and it needs speed 1+ε.]
Measure of resource concentration: when n_t jobs are alive and the algorithm gives job i the fraction ρ_i of the processors, β_t is defined from n_t and Σ_{i ≤ n_t} (ρ_i)²: β_t ≈ 1 when the resources are spread evenly over all n_t jobs, and β_t → 0 when they are concentrated on a few.

Lower Bound
An arbitrary algorithm specifies the processor allocation for each job whenever n_t jobs are alive; let β = lim_{t→∞} β_t, its limiting resource concentration.
β → 0: too concentrated — the favored jobs may be sequential, and the performance is 1/β.
Constant β: spread too thin — needs speed 1+ε.

Lower Bound
[Figure: the input, Opt, and Alg; t_i is the time between arrivals.]
Opt ignores the extra jobs and completes the stream as it arrives, so its flow time stays small: Flow(Opt) = Σ_i 2 t_i.
Alg attempts all the jobs and completes none, so they accumulate: Flow(Alg) = Σ_i (1 + …) t_i.
Oops: we need an extra restriction — that the adversary can switch the job the algorithm is working on the most to being sequential. This is likely fine, because the algorithm favors the more recent jobs.

Lower Bound: the adversary's construction
The algorithm (arbitrary — think Equi or LAPS_β) specifies a processor allocation for each job whenever n_t jobs are alive.
Compute the work w_i completed on each job under that allocation.
The adversary gives job i the work w_i, so that no job completes under the algorithm.
The time t_i between arrivals is set to w_i, so Opt can complete each job as it arrives.
Closing the circular dependence takes Brouwer's fixed point theorem and some non-trivial algebra.
Finally, compute the competitive ratio.

Proof Sketch
Upper bound: F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1) for constant β and ε.

Proof Sketch
In the worst-case inputs, each phase is either fully sequential or fully parallelizable.

Potential Function
Define a potential function Φ_t. It says how much debt LAPS has in the bank.
Φ_0 = Φ_final = 0, and Φ_t does not increase as jobs arrive or complete.
At all other times, dF(LAPS)_t/dt + dΦ_t/dt ≤ c · dF(Opt)_t/dt.
The result follows by integrating: F(LAPS) + Φ_final − Φ_0 ≤ c · F(Opt).
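Spelling out the integration step (it uses only Φ_0 = Φ_final = 0 and that Φ never increases when a job arrives or completes):
\[
F(\mathrm{LAPS})=\int_t \frac{dF(\mathrm{LAPS})_t}{dt}\,dt
\;\le\;\int_t\Big(c\,\frac{dF(\mathrm{Opt})_t}{dt}-\frac{d\Phi_t}{dt}\Big)dt
\;=\;c\,F(\mathrm{Opt})-\big(\Phi_{\mathrm{final}}-\Phi_0\big)+\!\!\sum_{\text{arrivals, completions}}\!\!\Delta\Phi
\;\le\; c\,F(\mathrm{Opt}).
\]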

Potential Function
The n_t jobs currently alive, sorted by arrival time, get the coefficients 1, 2, 3, …, n_t.
Let x_i be the parallelizable work done by Opt but not by LAPS on the i-th of these jobs.
Φ_t = γ · Σ_{i ∈ [n_t]} i · max(x_i, 0).
Job arrives: the new job gets coefficient n_t + 1 and x = 0, so dΦ_t = 0.

Potential Function
Φ_t = γ · Σ_{i ∈ [n_t]} i · max(x_i, 0).
Job completes: the coefficients of all later-arriving jobs shift down by one (job i+1 takes coefficient i, …, the last takes n_t − 1), so dΦ_t ≤ 0.

Potential Function
Φ_t = γ · Σ_{i ∈ [n_t]} i · max(x_i, 0).
Opt works: it increases the x_i at total rate at most its speed, 1, and every coefficient is at most n_t, so dΦ_t ≤ γ · n_t · 1.

Potential Function
LAPS works (speed 1+ε): it shares its processors equally among the βn_t latest-arriving alive jobs, i.e. those with coefficients in the range ((1−β)n_t, n_t].
Each favored job that is parallelizable and on which LAPS is behind Opt (x_i > 0) has its x_i decrease at rate (1+ε)/(βn_t).
Let bℓ_t be the number of favored jobs that are sequential under LAPS or on which LAPS is ahead (x_i ≤ 0); their terms of Φ do not decrease.
Hence LAPS's work contributes  dΦ_t ≤ −γ · Σ_{i ∈ ((1−β)n_t, n_t − bℓ_t]} (1+ε) i / (βn_t).

Potential Function
Combining the Opt and LAPS contributions:  dΦ_t ≤ γ · ( n_t − Σ_{i ∈ ((1−β)n_t, n_t − bℓ_t]} (1+ε) i / (βn_t) ),  with bℓ_t ≤ N_t.
Here n_t = # jobs alive under LAPS = dF(LAPS)_t/dt, and N_t = # jobs alive under Opt = dF(Opt)_t/dt.
A page of math later:  dF(LAPS)_t/dt + dΦ_t/dt ≤ c · dF(Opt)_t/dt  with resulting competitive ratio c = Θ(1/(βε)),  and the proof is done.
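Sketching the "page of math" from the bounds as reconstructed above (my algebra, not the paper's; it assumes bℓ_t ≤ βn_t — otherwise N_t ≥ bℓ_t > βn_t already gives dF(LAPS)_t/dt + dΦ_t/dt ≤ (1+γ)n_t ≤ (1+γ)N_t/β):
\[
\frac{d\Phi_t}{dt}
\;\le\; \gamma n_t \;-\; \gamma\,\frac{1+\epsilon}{\beta n_t}\sum_{i=(1-\beta)n_t+1}^{\,n_t-b\ell_t} i
\;\le\; \gamma n_t \;-\; \gamma\,\frac{(1+\epsilon)(1-\beta)}{\beta}\,\big(\beta n_t-b\ell_t\big)
\;\le\; \gamma n_t\big(1-(1+\epsilon)(1-\beta)\big) \;+\; \gamma\,\frac{(1+\epsilon)(1-\beta)}{\beta}\,N_t .
\]
Choosing \gamma = \frac{1}{(1+\epsilon)(1-\beta)-1} = \Theta(1/\epsilon) makes the n_t terms cancel against dF(LAPS)_t/dt = n_t, leaving
\[
\frac{dF(\mathrm{LAPS})_t}{dt}+\frac{d\Phi_t}{dt}
\;\le\; \gamma\,\frac{(1+\epsilon)(1-\beta)}{\beta}\;\frac{dF(\mathrm{Opt})_t}{dt}
\;=\; \Theta\!\Big(\tfrac{1}{\beta\epsilon}\Big)\,\frac{dF(\mathrm{Opt})_t}{dt},
\quad\text{which is } \Theta\!\Big(\tfrac{1}{\epsilon^{2}}\Big)\text{ for }\beta=\tfrac{\epsilon}{2}.
\]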

Conclusions
LAPS_β (Latest Arrival Processor Sharing): share the processors equally among the βn_t latest-arriving of the n_t alive jobs.
[EP] Resource augmentation: F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1) for constant β and ε.
[E] Suboptimal load threshold: ∀ Alg ∃ ε : F(Alg_{1+ε}, I) / F(Opt_1, I) = ω(1).

Other Models, Same Techniques
Broadcast: many requests for the same page serviced simultaneously [EP: SODA 02, EP: SODA 03].
TCP: additive increase & multiplicative decrease behaves like EQUI [EDD: PAA 03, E: Latin 04].
Speed scaling: each algorithm can dynamically choose its speed s, but must pay for it with energy P(s) = s^α [CELLSP: STACS 09, EP ??].

Thank you

Conclusions
LAPS_β: share the processors equally among the βn_t latest-arriving of the n_t alive jobs.
[EP] Resource augmentation: F(LAPS_{⟨β, 1+ε⟩}, I) / F(Opt_1, I) = Θ(1) for constant β and ε.
[E]: ∀ Alg ∃ ε : F(Alg_{1+ε}, I) / F(Opt_1, I) = ω(1).
[CELLMP] Speed scaling: with β = 1/α, LAPS_β is Θ(α²)-competitive.
[EP] Speed scaling with multiprocessors: LAPS is Θ(log p)-competitive.

Scheduling in the Dark: the line of work
Multiprocessor, batch: Edmonds, Chinn, Brecht, Deng, STOC 97.
Speed 2+ε: Edmonds, STOC 99.
Speed 1+ε: Edmonds, Pruhs, SODA 09.
∀ Alg ∃ ε not competitive: ?, STOC 09.
Multicast: reduction, SODA 02; LWF, SODA 04.
TCP, one bottleneck: Edmonds, Datta, Dymond, PAA 03; general network: Latin 04.
Speed scaling, one processor: Chan, Edmonds, Lam, Lee, Marchetti-Spaccamela, and Pruhs, ?, STACS 09; multiprocessor: being written.

Nonclairvoyant Speed Scaling for Flow and Energy
Ho-Leung Chan (Pittsburgh), Jeff Edmonds (York), Tak-Wah Lam (Hong Kong), Lap-Kei Lee (Hong Kong), A. Marchetti-Spaccamela (Roma), Kirk Pruhs (Pittsburgh)
Submitted to STACS 2009

Speed Scaling
Each algorithm can dynamically choose its speed s, but it must pay for it with energy P(s) = s^α.
Objective: flow plus energy against the optimal, (F(Alg) + E(Alg)) / Opt, where F(Alg) + E(Alg) = ∫_t ( n_t⟨Alg⟩ + (s_t)^α ) dt.
Known: a 3-competitive clairvoyant algorithm [BCP].
New [CELLMP]: with β = 1/α, LAPS_β is Θ(α³)-competitive.
LAPS_β runs at total speed s⟨LAPS, t⟩ = (n_t)^{1/α} and partitions that speed among the βn_t latest-arriving jobs.
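A minimal sketch (my encoding, not the paper's) of the rule the slide describes: run at total speed (n_t)^{1/α} and split that speed over the β·n_t latest arrivals, with β = 1/α.

    import math

    def laps_speed_scaled(alive, alpha):
        # alive: list of (job_id, arrival_time) for the n_t currently alive jobs
        # returns (total_speed, {job_id: speed share})
        n_t = len(alive)
        if n_t == 0:
            return 0.0, {}
        total_speed = n_t ** (1.0 / alpha)           # s<LAPS, t> = (n_t)^(1/alpha)
        beta = 1.0 / alpha                           # the choice of beta from the slide
        k = max(1, math.ceil(beta * n_t))
        favored = sorted(alive, key=lambda job: job[1])[-k:]  # the k latest arrivals
        return total_speed, {job_id: total_speed / k for job_id, _ in favored}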

Speed Scaling: Multiprocessors & Parallel-Sequential Jobs
Each algorithm can dynamically choose its speed s, but it must pay for it with energy P(s) = s^α.
Known: a 3-competitive clairvoyant algorithm [BCP]; [CELLMP]: with β = 1/α, LAPS_β is Θ(α³)-competitive — but only for one processor or fully parallelizable jobs.
New [EP]: multiprocessors and parallel-sequential jobs.

Speed Scaling: Two Multiprocessor Models
Each algorithm can dynamically choose its speed, paying energy P(s) = s^α.
Processors model: dynamically allocate p_i unit-speed processors to job J_i; energy is (#processors)^α per unit time. α²-competitive.
Individual speeds model: dynamically partition the p processors among the jobs and run processor k at speed s_k; energy is (s_k)^α per processor per unit time. (log p)-competitive.
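A sketch (my reading of the two accounting rules, not code from the paper) of how energy would accumulate over a small time step of length dt in each model:

    def energy_processors_model(procs_per_job, alpha, dt):
        # processors model: job J_i is allocated p_i unit-speed processors;
        # reading the slide as charging (p_i)**alpha per unit time for each job
        return sum(p_i ** alpha for p_i in procs_per_job.values()) * dt

    def energy_individual_speeds_model(speeds, alpha, dt):
        # individual speeds model: processor k runs at speed s_k;
        # energy is (s_k)**alpha per processor per unit time
        return sum(s_k ** alpha for s_k in speeds) * dt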

Speed Scaling for Flow and Energy with Multi-Processors and Arbitrary Speedup Curves
Jeff Edmonds, York University
Kirk Pruhs, University of Pittsburgh
Being written

Performance vs. Load
Defn: A set of jobs I has load s ∈ [0, 1] if it can be optimally handled with speed s, i.e. F(Opt_s, I) is bounded.
Defn: F_β(s) = max_{I with load s} F(LAPS_{⟨β, 1⟩}, I) / F(Opt_s, I).
Equi (β = 1) has the best performance, but it can only handle half load, s = 1/2.
Small β can handle almost full load, s ≈ 1 − β, but its performance degrades with 1/β as the load approaches that threshold.

Speed Scaling
Each algorithm can dynamically choose its speed s, but it must pay for it with energy P(s) = s^α.
Known: a 3-competitive clairvoyant algorithm [BCP].
New [CELLMP]: with β = 1/α, LAPS_β is Θ(α²)-competitive.
Lower bound: every nonclairvoyant algorithm is ω(1)-competitive, with a bound that grows with α for P(s) = s^α.