Evaluating Heuristics for the Fixed-Predecessor Subproblem of Pm | prec, pj = 1 | Cmax

Outline
1. Introduction
2. Approach
3. CP and LNS heuristics
4. HLF heuristics
5. Numerical results

Pm | prec, pj = 1 | Cmax
Problem: find the makespan-minimizing schedule for a set of unit-length jobs with arbitrary precedence constraints
Efficient algorithms exist for m = 2
The complexity is unknown for fixed m >= 3

Question
Can we discover anything by restricting to subproblems with more structured precedence constraints?
Are any approaches we know optimal for these subproblems?

Motivation
If a subproblem is found to be easy:
More information about the boundary between easy and hard problems
Such instances can easily be scheduled in the real world
If a subproblem is found to be hard:
The general case is also hard, resolving the open problem
It is easier to reason about a problem with more structure

Subproblems
We saw two such subproblems in class: the in-tree and the out-tree

Heuristics
Critical path (CP) and largest number of successors (LNS) are optimal for both in-trees and out-trees

Heuristics
Critical path: prioritize nodes at the head of the longest path of jobs that still need to run
Largest number of successors: prioritize nodes that are a predecessor (direct or indirect) of the most nodes
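The two priority rules above can be sketched as list scheduling with different priorities. This is a minimal illustration, not the authors' implementation; the adjacency-map representation (`succ`, `pred`) and all function names are assumptions.

```python
def make_priorities(succ, nodes):
    """Compute both priorities for unit-length jobs.
    CP: length of the longest chain of jobs starting at the node.
    LNS: number of distinct direct or indirect successors."""
    cp, desc = {}, {}

    def level(v):
        if v not in cp:
            cp[v] = 1 + max((level(s) for s in succ[v]), default=0)
        return cp[v]

    def descendants(v):
        if v not in desc:
            ds = set(succ[v])
            for s in succ[v]:
                ds |= descendants(s)
            desc[v] = ds
        return desc[v]

    return ({v: level(v) for v in nodes},
            {v: len(descendants(v)) for v in nodes})

def list_schedule(pred, succ, nodes, m, prio):
    """Greedy list scheduling: at each unit time step, run up to m
    ready jobs, highest priority first; return the makespan."""
    waiting = {v: len(pred[v]) for v in nodes}
    ready = [v for v in nodes if waiting[v] == 0]
    t = done = 0
    while done < len(nodes):
        ready.sort(key=lambda v: prio[v], reverse=True)
        batch, ready = ready[:m], ready[m:]
        t += 1
        done += len(batch)
        for v in batch:
            for s in succ[v]:      # newly released successors become ready
                waiting[s] -= 1
                if waiting[s] == 0:
                    ready.append(s)
    return t
```

Ties among equal-priority ready jobs are broken arbitrarily here; as the later slides show, that tie-breaking is exactly where these heuristics can produce different makespans on the same instance.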

Generalization Can we find other precedence structures for which these heuristics are optimal?

Generalization
In-tree: each node has one successor
Out-tree: each node has one predecessor
Both are planar

Generalization Generalize out-tree: allow arbitrary number K of predecessors per node Pictured: K = 3

Method
Generate many instances with the K-predecessor structure
Solve each instance with each algorithm, several times
If an algorithm performs worse than another algorithm on some instance, it cannot be optimal
If an algorithm's schedules differ in makespan across trials, it cannot be optimal
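The two ruling-out conditions above can be sketched as a small harness. This is an illustrative sketch only; the `algorithms` mapping (names to makespan-returning schedulers) is an assumption, not the authors' code.

```python
def rule_out(algorithms, instances, trials=5):
    """Return the algorithms not yet ruled out as optimal.
    An algorithm is ruled out if, on some instance, (a) its own trials
    disagree on the makespan, or (b) it is beaten by another algorithm."""
    candidates = set(algorithms)
    for inst in instances:
        # Set of distinct makespans each algorithm produced on this instance.
        spans = {name: {alg(inst) for _ in range(trials)}
                 for name, alg in algorithms.items()}
        best = min(min(s) for s in spans.values())
        for name, s in spans.items():
            if len(s) > 1 or max(s) > best:
                candidates.discard(name)
    return candidates
```

Note that surviving this filter is only a necessary condition for optimality, never a proof, which matches the experimental framing of the slides.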

Instance Generation
Add K root nodes to the graph
Iteratively add nodes, randomly choosing K predecessors from the existing nodes each time
Any valid instance has a chance of being generated by this algorithm
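The generation procedure above can be sketched in a few lines. The dictionary-of-predecessors representation and the function name are assumptions for illustration, not the authors' implementation.

```python
import random

def gen_instance(n, k, seed=None):
    """Random K-predecessor instance: k root nodes, then n - k further
    nodes, each given k distinct predecessors drawn uniformly from the
    nodes added before it. Nodes are numbered in insertion order, so
    every edge points from a lower-numbered node to a higher one."""
    rng = random.Random(seed)
    pred = {v: [] for v in range(k)}          # roots: no predecessors
    for v in range(k, n):
        pred[v] = rng.sample(range(v), k)     # k distinct earlier nodes
    return pred
```

Because predecessors always precede their node in insertion order, the generated graph is acyclic by construction.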

Results: CP and LNS
Neither CP nor LNS is optimal for the K-predecessor problem
On some instances, both result in inconsistent Cmax

Results: CP
Both algorithms fail on this graph. Here are the CP schedules:
(Schedule tables: two CP schedules for the same instance on three machines, one with makespan 4 and one with makespan 5.)

Results: LNS
Both algorithms fail on this graph. Here are the LNS schedules:
(Schedule tables: two LNS schedules for the same instance on three machines, one with makespan 4 and one with makespan 5.)

Results: Planar Graphs
Even if we restrict to planar 2-predecessor graphs, there are instances for which both CP and LNS have inconsistent Cmax
(Schedule tables: two schedules on three machines with different makespans; each schedule satisfies both the CP and LNS heuristics.)

Other Heuristics?
P2 | prec, pj = 1 | Cmax has been solved efficiently
All highest-level-first (HLF) schedules are optimal
A proof and an almost-linear algorithm are given by [Gabow 1982]
Level of a node: length of the critical path starting at that node

HLF Heuristic
A restriction of CP schedules:
1. Process nodes from highest to lowest level
2. When finishing a level, if a machine remains available, "jump" a runnable job from the highest possible level
3. If there are multiple candidates, choose the one that allows future jumps to be from as high a level as possible

Jumps
With 2 machines, node 16 is processed and a machine is free
Node 15 or node 10 can be "jumped" to run on the free machine
Choose 15, since it has the higher level
Image: [Gabow 1982]

Generalizing to M >= 3
It is not obvious how to generalize Gabow's algorithm, since it assumes only one job is jumped each time
For M = 3, either 1 or 2 jobs can be jumped each time
We can still generate HLF schedules, though less efficiently, and observe their performance
Assumption: given a choice, we want to jump as many nodes as possible, minimizing idle time

HLF Self-Consistency
Check whether HLF schedules for the same instance agree on Cmax
Generate 20,000,000 instances, each with 20 nodes and K = 3
Compare the makespans of all HLF schedules
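The self-consistency test above reduces to a small check per instance. This sketch assumes a `schedule_fn(instance, rng)` callable that returns a makespan while using the RNG for tie-breaking; that interface is an assumption for illustration.

```python
import random

def self_consistent(schedule_fn, instance, trials=8, seed=0):
    """Run a heuristic several times on one instance, randomizing its
    tie-breaking via the supplied RNG. If two runs produce different
    makespans, some run was strictly worse than another, so the
    heuristic cannot be optimal on this instance."""
    rng = random.Random(seed)
    spans = {schedule_fn(instance, rng) for _ in range(trials)}
    return len(spans) == 1
```

As the next slide notes, agreement across sampled tie-breakings is weak evidence only: it neither rules out a disagreement under an untried tie-breaking nor implies optimality.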

HLF Self-Consistency
Result: no disagreements occurred
Caveats:
This does not mean no disagreement is possible
Even if HLF is consistent, that does not mean it is optimal

HLF Non-Optimal There are (very rare) instances where some CP schedules beat HLF

HLF Non-Optimal
(Schedule tables: a CP schedule and a longer HLF schedule for the same instance on three machines.)
Note: Highest level at the bottom. Level is in [brackets]. Green nodes are executed on their level; blue nodes are jumped.
CP Schedule: Superior
HLF Schedule: Inferior

HLF Non-Optimal
(Schedule tables: the same CP and HLF schedules on three machines.)
Note: It is actually optimal to execute fewer nodes early on, so that the critical job #7 can finish earlier.
CP Schedule: Superior
HLF Schedule: Inferior

HLF Non-Optimal
In the 2-processor case, exactly 1 node is jumped each time
With 3 processors, we can choose between jumping 1 or 2 nodes
Sometimes it is better to jump only 1, allowing a critical job to run earlier

Numerical Results
Average makespan for each algorithm, given M, N, K
M: number of machines
N: number of jobs
K: branching factor
(Table: columns M, N, K, CP, LNS, HLF, RANDOM; numeric entries not preserved in the transcript.)

Numerical Analysis
HLF is better than CP and LNS (by a tiny fraction)
The HLF implementation was too inefficient to run on large graphs (entries marked N/A)
CP and LNS are near identical, with some divergence when K changes
(Table: columns M, N, K, CP, LNS, HLF, RANDOM; HLF entries are N/A for the larger graphs.)

Numerical Analysis
Random performed admirably, but scaled less well as the number of machines increased (though the difference seemed to shrink with the branching factor)
(Table: columns M, N, K, CP, LNS, HLF, RANDOM; HLF entries are N/A for the larger graphs.)

End