Different Local Search Algorithms in STAGE for Solving Bin Packing Problem Gholamreza Haffari Sharif University of Technology

Overview Combinatorial Optimization Problems and State Spaces STAGE Algorithm Local Search Algorithms Results Conclusion and Future Work

Optimization Problems Objective function: F(x_1, x_2, …, x_n). Find the vector X = (x_1, x_2, …, x_n) that minimizes (or maximizes) F, subject to the constraints g_1(X) ≤ 0, g_2(X) ≤ 0, …, g_m(X) ≤ 0.

Combinatorial Optimization Problems (COP) A special kind of optimization problem in which the variables are discrete. Most COPs are NP-hard, i.e., no polynomial-time algorithm for solving them is known (and none exists unless P = NP).

Satisfiability SAT: Given a formula f(x_1, x_2, …, x_n) in propositional calculus, is there an assignment to its variables that makes it true? The problem is NP-complete (Cook 1971).

Bin Packing Problem (BPP) Given a list (a_1, a_2, …) of items, each with a size s(a_i) > 0, and a bin capacity C, what is the minimum number of bins needed to pack all the items? The problem (in its decision version) is NP-complete (Garey and Johnson 1979).

An Example of BPP (Figure: items a_1, a_2, a_3, a_4 packed into bins b_1, b_2, b_3, b_4.) Object list: a_1, a_2, …, a_n. Each bin b_j has capacity C. Objective: minimize the number of bins m such that the items assigned to every bin fit, i.e. Σ_{a_i ∈ b_j} s(a_i) ≤ C for 1 ≤ j ≤ m.

Definition of State in BPP A particular permutation of the items in the object list is called a state. (Figure: a greedy algorithm maps the permutation a_1, a_2, a_3, a_4 to a packing into bins b_1, …, b_4.) A sketch of such a greedy decoder follows.
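
The slide names only "Greedy Algorithm" for turning a permutation into a packing, so this is a hedged sketch: it assumes the classic first-fit rule, and the function name `first_fit`, the item sizes, and the capacity of 10 are all illustrative.

```python
def first_fit(permutation, sizes, capacity):
    """Pack items in the given order; each goes into the first bin with room."""
    bins = []                      # free space remaining in each open bin
    for item in permutation:
        size = sizes[item]
        for j, free in enumerate(bins):
            if size <= free:
                bins[j] -= size    # place the item in the first bin that fits
                break
        else:                      # no open bin fits: open a new bin
            bins.append(capacity - size)
    return len(bins)               # objective value: number of bins used

# The order of the permutation matters:
sizes = {"a1": 4, "a2": 4, "a3": 6, "a4": 6}
print(first_fit(["a1", "a2", "a3", "a4"], sizes, 10))  # -> 3 bins
print(first_fit(["a1", "a3", "a2", "a4"], sizes, 10))  # -> 2 bins
```

The two permutations decode to different bin counts, which is exactly why searching over permutations makes sense.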

State Space of BPP The states are the permutations of the item list, e.g. (a_1, a_2, a_3, a_4), (a_1, a_2, a_4, a_3), (a_1, a_4, a_2, a_3), (a_2, a_4, a_3, a_1), …

A Local Search Algorithm
1) s_0: a random start state
2) for i = 0 to +∞:
   - generate a set S of new solutions from the current solution s_i
   - decide whether s_{i+1} = s' ∈ S or s_{i+1} = s_i
   - if a stopping condition is satisfied, return the best solution found
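
A runnable Python rendering of this loop, under stated assumptions: the moves are single swaps of two positions, the candidate set S is the whole swap neighborhood, and a new state is accepted only if it strictly improves the objective. The slide leaves all three choices open.

```python
def swap_neighbors(perm):
    """All permutations reachable by swapping two positions."""
    out = []
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            q = list(perm)
            q[i], q[j] = q[j], q[i]
            out.append(tuple(q))
    return out

def local_search(s0, cost, neighbors, max_iters=1000):
    """Greedy descent following the loop on the slide."""
    current = s0
    for _ in range(max_iters):
        candidates = neighbors(current)        # generate the set S
        s_prime = min(candidates, key=cost)    # pick a candidate s' from S
        if cost(s_prime) < cost(current):      # decide: accept s' or keep s_i
            current = s_prime
        else:                                  # stop: no neighbor improves,
            break                              # i.e. a local optimum
    return current

# Example, using the first_fit decoder sketched above:
# best = local_search(("a1", "a2", "a3", "a4"),
#                     lambda p: first_fit(p, sizes, 10), swap_neighbors)
```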

Local Optimum Solutions The quality of the local optimum that a local search process reaches depends on the starting state.

Multi-Start LSA Runs the base local search algorithm from different starting states and returns the best result found. Is it possible to choose a promising new starting state?
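
A minimal sketch of the multi-start wrapper around the `local_search` routine above; the restart count and the use of uniformly random permutations are assumptions. This is the baseline that STAGE improves on by choosing restarts deliberately.

```python
import random

def multi_start(items, cost, neighbors, n_starts=10):
    """Run the base local search from several random starting permutations
    and keep the best local optimum found."""
    best = None
    for _ in range(n_starts):
        start = tuple(random.sample(items, len(items)))  # random start state
        result = local_search(start, cost, neighbors)
        if best is None or cost(result) < cost(best):
            best = result
    return best
```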

Other Features of a State Features of a state other than its objective value can help guide the search process (Boyan 1998).

Previous Experiences The local optima of a COP are related to one another, so previously found local optima can help locate more promising start states.

Core Ideas Use an evaluation function (EF) to predict the eventual outcome of running a local search from a given state. The EF is a function of some features of the state, and it is retrained gradually as the search proceeds.

STAGE Algorithm Uses the EF to locate a good start state, runs the local search from it, and retrains the EF with the newly generated search trajectory. The algorithm thus alternates between an execution phase and a learning phase.
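
A compact sketch of the STAGE outer loop, following the description above and Boyan (1998). Assumptions to note: `features(s)` returns a NumPy vector, a linear least-squares fit serves as the EF (one simple choice of function approximator), `local_search` is the routine sketched earlier, and `descend_with_trajectory` is a hypothetical helper; all names are illustrative.

```python
import numpy as np

def descend_with_trajectory(s, cost, neighbors):
    """First-improvement descent that records every accepted state."""
    traj = [s]
    improved = True
    while improved:
        improved = False
        for s_prime in neighbors(s):
            if cost(s_prime) < cost(s):
                s = s_prime
                traj.append(s)
                improved = True
                break
    return traj

def stage(s0, cost, neighbors, features, rounds=20):
    X, y = [], []
    start, best = s0, s0
    for _ in range(rounds):
        # Execution phase: run the base local search, recording its trajectory.
        traj = descend_with_trajectory(start, cost, neighbors)
        if cost(traj[-1]) < cost(best):
            best = traj[-1]
        outcome = cost(traj[-1])      # the value this search eventually reached
        for s in traj:                # every visited state becomes a training
            X.append(features(s))     # pair: (features of s, eventual outcome)
            y.append(outcome)
        # Learning phase: refit the EF on all trajectories collected so far.
        w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        ef = lambda s, w=w: float(features(s) @ w)
        # Pick the next start by hill-climbing on the EF, not the true cost.
        start = local_search(traj[-1], ef, neighbors)
    return best
```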

Evaluation Function Pipeline: State → Features → EF → Prediction. Applying the EF to a state yields a prediction, so the EF can be used by another local search algorithm to find a good new starting point.

Diagram of STAGE (Boyan 98)

Analysis of STAGE What is the effect of using different local search algorithms? Local search algorithms: Best Improvement Hill Climbing (BIHC), First Improvement Hill Climbing (FIHC), and Stochastic Hill Climbing (STHC).

Best Improvement HC Generates all of the neighboring states and then selects the best one.

First Improvement HC Generates the neighboring states systematically, one at a time, and selects the first one that improves on the current state.

Stochastic HC Stochastically generates some of the neighboring states and then selects the best one. The number of sampled neighbors is called the PATIENCE. The sketch below contrasts the three variants.
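
The three variants can be written as one-step move rules against the `cost`/`neighbors` interface used earlier. Whether STHC demands strict improvement is not stated on the slide, so that check is an assumption, and PATIENCE defaults to the 350 used for STHC1 in the results below.

```python
import random

def bihc_step(s, cost, neighbors):
    """Best-improvement: evaluate ALL neighbors, move to the best if it improves."""
    best = min(neighbors(s), key=cost)
    return best if cost(best) < cost(s) else s

def fihc_step(s, cost, neighbors):
    """First-improvement: scan neighbors in a fixed order, take the first improvement."""
    for s_prime in neighbors(s):
        if cost(s_prime) < cost(s):
            return s_prime
    return s

def sthc_step(s, cost, neighbors, patience=350):
    """Stochastic: sample PATIENCE random neighbors, move to the best of the sample."""
    pool = neighbors(s)
    sample = random.sample(pool, min(patience, len(pool)))
    best = min(sample, key=cost)
    return best if cost(best) < cost(s) else s
```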

Different LSAs (Figure: the different LSAs solving the U250_00 instance.)

Different LSAs, bounded steps

Some Results Comparing STHC1 (PATIENCE = 350) with STHC2 (PATIENCE = 700) shows that the more accurately the next state is chosen, the better the quality of the final solution. Comparing BIHC with the others shows that deeper steps lead to higher-quality solutions, found faster.

Different LSAs, bounded moves

Some Results Comparing STHC with the others suggests that it is better to sample the solution space randomly than to search it systematically.

Future Work Using other learning structures in STAGE. Verifying these results on other problems (for example, Graph Coloring). Using other LSAs, such as Simulated Annealing.

Questions