Introduction to Optimization

Ingredients of Optimization
Objective function, decision variables, constraints. Find values of the decision variables that minimize or maximize the objective function while satisfying the constraints.

Ingredients of Optimization
Objective function: max (or min) some function of the decision variables, subject to (s.t.) equality (=) and inequality (≤, ≥) constraints.
Search space: the range or set of values of the decision variables that will be searched during optimization. Often of calculable size in combinatorial optimization.

Constraints Constraints can be hard (must be satisfied) or soft (desirable to satisfy). Example: in your course schedule, a hard constraint is that no classes overlap; a soft constraint is that no class starts before 10 AM. Constraints can be explicit (stated in the problem) or implicit (implied by the nature of the problem).

Example: TSP (Traveling Salesman Problem). Given the coordinates of n cities, find the shortest closed tour which visits each city exactly once. In the TSP, an implicit constraint is that every city be visited exactly once.
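The TSP objective can be made concrete with a small helper; the unit-square instance below is illustrative, not from the slides:

```python
import math

def tour_length(coords, tour):
    """Length of the closed tour visiting the cities in the given order."""
    n = len(tour)
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
        for i in range(n)
    )

# Four cities on the corners of a unit square; the perimeter tour has length 4.
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_length(coords, [0, 1, 2, 3]))  # 4.0
```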

Aspects of an Optimization Problem
Continuous or combinatorial
Size: number of decision variables; range/count of their possible values; search-space size
Degree of constraint: number of constraints; difficulty of satisfying the constraints; proportion of feasible solutions in the search space
Single or multiple objectives
Deterministic (all variables are deterministic) or stochastic (the objective function and/or some decision variables and/or some constraints are random variables)

Types of Solutions A solution to an optimization problem specifies the values of the decision variables, and therefore also the value of the objective function. A feasible solution satisfies all constraints. An optimal solution is feasible and provides the best objective function value. A near-optimal solution is feasible and provides a very good objective function value, though not necessarily the best.

Continuous vs Combinatorial Optimization Optimization problems can be continuous (an infinite number of feasible solutions) or combinatorial (a finite number of feasible solutions). Continuous problems generally maximize or minimize a function of continuous variables, such as min {4x + 5y} where x and y are real numbers. Combinatorial problems generally maximize or minimize a function of discrete variables, such as min {4x + 5y} where x and y are countable items (e.g. integers only).
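When the discrete domain is tiny, the combinatorial variant of min {4x + 5y} can be solved by plain enumeration. In this sketch the bounds (0..3) and the constraint x + y ≥ 2 are hypothetical, added only so the minimum is not trivially (0, 0):

```python
# Combinatorial variant of min {4x + 5y}: x, y restricted to integers 0..3,
# with a hypothetical constraint x + y >= 2.
feasible = [(x, y) for x in range(4) for y in range(4) if x + y >= 2]
best = min(feasible, key=lambda p: 4 * p[0] + 5 * p[1])
print(best, 4 * best[0] + 5 * best[1])  # (2, 0) with objective value 8
```

The continuous variant has infinitely many feasible points, so no such enumeration is possible; that difference is exactly the continuous/combinatorial distinction above.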

Definition of Combinatorial Optimization (CO) "Combinatorial optimization is the mathematical study of finding an optimal arrangement, grouping, ordering, or selection of discrete objects usually finite in number." (Lawler, 1976)

Search Basics
Search is the term used for constructing/improving solutions to obtain the optimum or a near-optimum.
Solution: an encoding (a representation of the solution)
Neighborhood: nearby solutions (in the encoding or solution space)
Move: a transformation of the current solution into another (usually neighboring) solution
Evaluation: the solution's feasibility and objective function value

Search Basics Constructive search techniques work by building a solution step by step, evaluating that solution for (a) feasibility and (b) objective function value. Improvement search techniques work by starting from a solution and repeatedly moving to a neighboring solution, evaluating each solution for (a) feasibility and (b) objective function value.

Search Basics Search techniques may be deterministic (always arrive at the same final solution through the same sequence of solutions, although they may depend on the initial solution). Examples are LP (simplex method), tabu search, simple heuristics like FIFO, LIFO, and greedy heuristics. Search techniques may be stochastic where the solutions considered and their order are different depending on random variables. Examples are simulated annealing, ant colony optimization and genetic algorithms.

Search Basics Search techniques may be local; that is, they find the nearest optimum, which may not be the true optimum. Example: a greedy heuristic (a local optimizer). Search techniques may be global; that is, they aim to find the true optimum even if this involves moving to worse solutions during the search (non-greedy).

Exhaustive or Brute-Force Search Wikipedia says: In computer science, brute-force search or exhaustive search is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
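For the TSP, brute force means enumerating every tour; a deliberately impractical sketch (the unit-square instance is illustrative). Fixing city 0 first removes rotational duplicates, so only (n-1)! tours are checked:

```python
import itertools
import math

def brute_force_tsp(coords):
    """Enumerate all (n-1)! tours that start at city 0; return the shortest."""
    n = len(coords)
    def length(tour):
        return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
                   for i in range(n))
    best = min(((0,) + p for p in itertools.permutations(range(1, n))),
               key=length)
    return best, length(best)

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
tour, d = brute_force_tsp(coords)
print(tour, d)  # an optimal perimeter tour of length 4.0
```

This is guaranteed to find the optimum, but the factorial number of candidates makes it hopeless beyond very small n, which motivates the heuristic methods below.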

How Can We Solve Problems? It can sometimes be advantageous to distinguish among three groups of methods for finding solutions to our abstract problems: (exact) algorithms, approximation algorithms, and heuristic algorithms.

(Exact) Algorithms An algorithm is sometimes described as a set of instructions that, when followed correctly, will result in the solution to a problem. Unless otherwise stated, an algorithm is assumed to give the optimal solution to an optimization problem; that is, not just a good solution, but the best solution.

Approximation Algorithms Approximation algorithms (as opposed to exact algorithms) do not guarantee to find the optimal solution; however, there is a bound on the quality of the solution. E.g., for a maximization problem, the algorithm can guarantee to find a solution whose value is at least half that of the optimal value.
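A classical example of such a half-of-optimal guarantee (not from these slides) is the modified greedy rule for the 0/1 knapsack problem: take the better of (a) packing items greedily by value/weight ratio and (b) the single most valuable item. A minimal sketch, with items assumed to be (value, weight) tuples:

```python
def knapsack_half_approx(items, capacity):
    """Return max(greedy-by-ratio value, best single item value).
    This classical rule achieves at least half the optimal knapsack value."""
    items = [it for it in items if it[1] <= capacity]  # drop items that never fit
    greedy_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= remaining:
            greedy_value += value
            remaining -= weight
    best_single = max((value for value, _ in items), default=0)
    return max(greedy_value, best_single)

# Hypothetical instance: capacity 7; the optimum packs values 6 and 5 for 11.
print(knapsack_half_approx([(6, 4), (5, 3), (4, 3)], capacity=7))  # 11
```

On this tiny instance the bound is not tight (the heuristic happens to find the optimum), but the guarantee of at least half the optimal value holds for every instance.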

Heuristic From the Greek heuriskein ('to find'). Wikipedia says: (…) A heuristic is a technique designed to solve a problem that ignores whether the solution can be proven to be correct, but which usually produces a good solution (…). Heuristics are intended to gain computational performance or conceptual simplicity, potentially at the cost of accuracy or precision.

(Problem-Dependent) Heuristics Heuristic algorithms do not guarantee to find the optimal solution; moreover, they do not necessarily have a bound on how badly they can perform. That is, they can return a solution that is arbitrarily bad compared to the optimal solution. In practice, however, heuristic algorithms (heuristics for short) have proven successful.

(Problem-Dependent) Heuristics Heuristics are rules of thumb used during search to find optimal or near-optimal solutions. Examples are FIFO, LIFO, earliest due date first, largest processing time first, shortest distance first, etc. Heuristics can be constructive (build a solution piece by piece) or improving (take a solution and alter it to find a better solution).

(Problem-Dependent) Heuristics Many constructive heuristics are greedy; that is, they take the best thing next without regard for the rest of the solution. Example: a constructive heuristic for the TSP is to visit the nearest city next. An improvement heuristic for the TSP is to take a tour and swap the order of two cities.
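Both TSP heuristics just described can be sketched in a few lines; `coords` is a hypothetical list of (x, y) city coordinates:

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Constructive greedy heuristic: always visit the nearest unvisited city next."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(coords[tour[-1]], coords[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def swap_improve(coords, tour):
    """Improvement heuristic: keep swapping pairs of cities while it shortens the tour."""
    def length(t):
        return sum(math.dist(coords[t[i]], coords[t[(i + 1) % len(t)]])
                   for i in range(len(t)))
    improved = True
    while improved:
        improved = False
        for i in range(len(tour)):
            for j in range(i + 1, len(tour)):
                cand = tour[:]
                cand[i], cand[j] = cand[j], cand[i]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]     # unit square
print(nearest_neighbor_tour(coords))           # a constructed tour
print(swap_improve(coords, [0, 2, 1, 3]))      # improves a deliberately bad tour
```

Note the greedy construction commits to each city without looking ahead, while the swap improvement only ever moves between neighboring tours, so both can stop at a local optimum.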

(Problem-Independent) Metaheuristics Meta: at an upper level. Heuristic: to find. A metaheuristic is formally defined as "an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions." [Osman and Laporte 1996]

Fundamental Properties of Metaheuristics [Blum and Roli 2003] Metaheuristics are strategies that “guide” the search process. The goal is to efficiently explore the search space in order to find (near-) optimal solutions. Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes. Metaheuristic algorithms are approximate and usually non-deterministic.

Fundamental Properties of Metaheuristics They may incorporate mechanisms to avoid getting trapped in confined areas of the search space. The basic concepts of metaheuristics permit an abstract-level description. Metaheuristics are not problem-specific. Metaheuristics may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy. Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

Fundamental Properties of Metaheuristics
Metaheuristics are not tied to any special problem type; they are general methods that can be adapted to fit a specific problem. They work by guessing the right directions for finding the true or near-optimal solution of complex problems, so that the space searched, and thus the time required, can be significantly reduced. Inspirations from nature include:
Simulated Annealing (SA): molecule/crystal arrangement during cool-down
Evolutionary Computation (EC): biological evolution
Tabu Search (TS): long- and short-term memory
Ant Colony and Swarms: individual and group behavior using communication between agents

Advantages of Metaheuristics Very flexible. Often global optimizers. Often robust to problem size, problem instance, and random variables. May be the only practical alternative.

Disadvantages of Metaheuristics Often need problem-specific information/techniques. Optimality (convergence) may not be guaranteed. Lack of theoretical basis. Different runs may yield different solutions to the same problem (stochastic). Stopping criteria. Multiple search parameters.

Classification of Metaheuristics
The most important way to classify metaheuristics is population-based vs. single-solution-based (Blum and Roli, 2003).
Single-solution-based algorithms work on a single solution: Hill Climbing, Simulated Annealing, Tabu Search.
Population-based algorithms work on a population of solutions: Genetic Algorithm, Ant Colony Optimization, Particle Swarm Optimization.

Classification of Metaheuristics
Single-solution-based: Hill Climbing, Simulated Annealing, Tabu Search
Population-based: Genetic Algorithm; Swarm Intelligence (Ant Colony Optimization, Particle Swarm Optimization)

Polynomial vs Exponential-Time Algorithms What is a "good" algorithm? It is commonly accepted that worst-case performance bounded by a polynomial function of the problem parameters is "good". We call this a polynomial-time algorithm. Such algorithms are strongly preferred because they can handle arbitrarily large data.

Polynomial vs Exponential-Time Algorithms In exponential-time algorithms, the worst-case running time grows as a function that cannot be polynomially bounded by the input parameters. Why is a polynomial-time algorithm better than an exponential-time one? Exponential-time algorithms have an explosive growth rate.

        n=5     n=10         n=100            n=1000
n       5       10           100              1000
n^2     25      100          10000            1000000
n^3     125     1000         1000000          10^9
2^n     32      1024         1.27 x 10^30     1.07 x 10^301
n!      120     3.6 x 10^6   9.33 x 10^157    4.02 x 10^2567

An example: TSP (Traveling Salesman Problem). [Figure: a 5-city instance with weighted edges.] One solution is the sequence 1-2-3-4-5, with tour length 31; the sequence 1-3-4-5-2 has tour length 63. There are (n-1)! tours in total.

Optimization vs Decision Problems Optimization Problem A computational problem in which the object is to find the best of all possible solutions. (i.e. find a solution in the feasible region which has the minimum or maximum value of the objective function.) Decision Problem A problem with a “yes” or “no” answer.

Convert Optimization Problems into Equivalent Decision Problems Instead of asking "what is the optimal value?", ask: is there a feasible solution to the problem with an objective function value equal to or better than a specified threshold?
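For the TSP, the decision version asks "is there a closed tour of length at most k?". A sketch that answers it by (impractical) enumeration, standing in for a real solver; the unit-square instance is illustrative:

```python
import itertools
import math

def tsp_decision(coords, k):
    """Decision version of the TSP: does any tour have length <= k?"""
    n = len(coords)
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
                     for i in range(n))
        if length <= k:
            return True   # "yes": such a tour exists
    return False          # "no": every tour is longer than k

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tsp_decision(coords, 4.0), tsp_decision(coords, 3.0))  # True False
```

Answering the decision question for a sequence of thresholds k (e.g. by binary search) recovers the optimal value, which is why the two formulations are considered equivalent.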

Class P The class of decision problems for which we can find a solution in polynomial time, i.e. P includes all decision problems for which there is an algorithm that halts with the correct yes/no answer in a number of steps bounded by a polynomial in the problem size n. The class P is generally thought of as consisting of relatively "easy" problems for which efficient algorithms exist.

Class NP NP = Nondeterministic Polynomial. NP is the class of decision problems for which we can check solutions in polynomial time, i.e. easy to verify but not necessarily easy to solve (like a multiple-choice problem).

Class NP Formally, it is the set of decision problems such that if x is a “yes” instance then this could be verified in polynomial time if a clue or certificate whose size is polynomial in the size of x is appended to the problem input. NP includes all those decision problems that could be polynomial-time solved if the right (polynomial-length) clue is appended to the problem input.
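For the TSP decision problem, a candidate tour is exactly such a certificate: verifying it takes only a linear pass, even though finding it may be hard. A minimal sketch (the unit-square instance is illustrative):

```python
import math

def verify_tsp_certificate(coords, tour, k):
    """Polynomial-time check of a 'yes' certificate for the TSP decision problem:
    the clue is a tour; we confirm it visits every city exactly once and has
    length <= k using O(n log n) work, far cheaper than searching all tours."""
    n = len(coords)
    if sorted(tour) != list(range(n)):          # each city exactly once
        return False
    length = sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
                 for i in range(n))
    return length <= k

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(verify_tsp_certificate(coords, [0, 1, 2, 3], 4.0))  # True
```

The certificate (the tour) has size polynomial in the input, and the check runs in polynomial time, which is what places the TSP decision problem in NP.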

Polynomial Reduction If problem P polynomially reduces to problem P´ and there is a polynomial-time algorithm for problem P´, then there is a polynomial-time algorithm for problem P. Problem P´ is at least as hard as problem P!

NP-Hard vs NP-Complete A problem P is said to be NP-Hard if all members of NP polynomially reduce to it. A problem P is said to be NP-Complete if (a) P ∈ NP, and (b) P is NP-Hard. [Figure: Venn diagram showing P inside NP, with NP-Complete as the intersection of NP and NP-Hard.]

Famous NP-Hard Problems Traveling Salesman Problem (TSP), Vehicle Routing Problem, Boolean Circuit Satisfiability Problem (SAT), Knapsack Problem, Graph Coloring Problems, Scheduling Problems, Partitioning Problems, Clustering Problems, Placement Problems, …

Trajectory Methods Basic Local Search: Iterative Improvement. Improve(N(S)) can be first improvement, best improvement, or an intermediate option.
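The first-improvement and best-improvement variants of Improve(N(S)) can be sketched generically; the toy objective f(x) = x^2 over the integers, with neighborhood {x - 1, x + 1}, is an illustrative assumption:

```python
def first_improvement(x, neighbors, f):
    """Move to the first neighbor that improves f (minimization); None at a local optimum."""
    for y in neighbors(x):
        if f(y) < f(x):
            return y
    return None

def best_improvement(x, neighbors, f):
    """Scan the whole neighborhood and move to its best member, if it improves f."""
    best = min(neighbors(x), key=f)
    return best if f(best) < f(x) else None

# Toy problem: minimize f(x) = x^2 over the integers.
f = lambda x: x * x
nbrs = lambda x: [x - 1, x + 1]
x = 5
while (y := first_improvement(x, nbrs, f)) is not None:
    x = y
print(x)  # 0
```

First improvement is cheaper per step (it stops scanning as soon as any improving move appears); best improvement takes steeper steps but must evaluate the entire neighborhood each iteration.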

Trajectory Methods [Figure: a search trajectory moving through solutions x1 → x2 → x3 → x4 → x5.]

Hill Climbing (Greedy Local Search) A greedy algorithm based on heuristic adaptation of the objective function to explore a better landscape.
begin
  t ← 0
  randomly create a string vc
  repeat
    evaluate vc
    select m new strings from the neighborhood of vc
    let vn be the best of the m new strings
    if f(vc) < f(vn) then vc ← vn
    t ← t + 1
  until t ≥ N
end
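A runnable version of this pseudocode; the bit-string objective (count of 1-bits) and the single-bit-flip neighborhood are illustrative assumptions, not part of the slides:

```python
import random

def hill_climb(f, neighbor, init, m=8, N=500, seed=0):
    """Greedy local search: each iteration samples m neighbors of the current
    string vc and moves to the best one vn only if f(vn) > f(vc) (maximization)."""
    rng = random.Random(seed)
    vc = init
    for _ in range(N):
        vn = max((neighbor(vc, rng) for _ in range(m)), key=f)
        if f(vn) > f(vc):
            vc = vn
    return vc

# Illustrative objective: maximize the number of 1-bits in a binary string.
f = lambda bits: sum(bits)

def flip_one(bits, rng):                 # neighborhood: flip one random bit
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

best = hill_climb(f, flip_one, init=[0] * 8)
print(sum(best))
```

On this objective every local optimum is global, so hill climbing reaches the all-ones string; on multimodal landscapes the same loop stalls at the nearest local optimum, which motivates the escape mechanisms discussed next.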

Hill Climbing (Greedy Local Search) [Figure: a search-space landscape showing two starting points, a local optimum, and the global optimum reached by climbing from different starts.]

Escaping From Local Optima