
School of Computer Science & Engineering Artificial Intelligence Local Search: A TSP Solution Dae-Won Kim School of Computer Science & Engineering Chung-Ang University

Q: Why is the TSP important?

Why are we doing TSP-like Projects?

Answer: Real-life problems NP-Hard problems Why are the real-life problems difficult?

Search space Constraints: hard vs. soft Evaluation function Environment: noisy, time, …

What is the size of search space of the TSP with 20 cities?

Search space = n! / (2n) = (n-1)! / 2 distinct tours (fix the starting city, and count each tour once in either direction)

With 20 cities, about 10^16 possible solutions (19!/2 ≈ 6 × 10^16)
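The count can be checked directly (a small sketch; `num_tours` is an illustrative helper, not part of the slides):

```python
import math

def num_tours(n):
    # Distinct tours of n cities in the symmetric TSP: n! orderings,
    # divided by n starting points and 2 travel directions = (n-1)!/2.
    return math.factorial(n - 1) // 2

print(num_tours(20))  # 60822550204416000, i.e. about 6 x 10^16
```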

What could be constraints for TSP?

e.g., must visit or cannot visit order

How to design an evaluation fn?

Given the tour “15 – 3 – 11 – 19 – 17”, Eval-fn = dist(15,3) + dist(3,11) + … + dist(19,17) + dist(17,15), where the last term closes the tour
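As a sketch, this evaluation function can be written directly from the definition, assuming a distance matrix `dist` indexed by city number (the final term returns to the starting city):

```python
def tour_length(tour, dist):
    # Sum of edge lengths along the tour, including the edge that
    # returns from the last city back to the first.
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
```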

What could be environment noise or dynamic factors for TSP?

Problem? Model? Solution?

Problem → Model → Solution

Problem → Model_a → Solution_p (an approximate model solved precisely)

Problem → Model_p → Solution_a (a precise model solved approximately)

Q: Which one of the two is better?

Problem → Model_p1 → Solution_a1 … Problem → Model_pn → Solution_an

Feasible Solution

Definition: a solution that satisfies the problem-specific constraints

Feasible space (F) ⊆ Search space (S)

F = S for TSP

The “search problem” and “optimization problem” are considered synonymous.

The search for the best feasible solution is the optimization problem

Problem for TSP

Given S, and F ⊆ S, find x ∈ F such that eval(x) ≤ eval(y) for all y ∈ F.

The search itself knows nothing about the problem; everything it uses is encoded in the model

Model for TSP

We need three factors for modeling

Representation Objective function Evaluation function

Representation: a permutation of the cities; it determines the search space. Objective function: a mathematical statement of the goal, e.g., minimize total dist(x,y). Evaluation function: maps each tour to its corresponding total distance.

There are many classic algorithms that are designed to search spaces for an optimum solution.

They fall into two disjoint classes

Algorithms that require the evaluation of partially constructed solutions (exhaustive search), and algorithms that only evaluate complete solutions (local search)

We come up with some search terms: “Uninformed, Informed, Exhaustive, Local, Blind, Heuristic, Incremental”

Exhaustive Search: classical DFS, BFS; backtracking, branch and bound, A*, …

We take advantage of an opportunity to organize the search and prune the number of alternative candidates that we need to examine

It is often called enumerative search

What could be its advantages?

It is simple. The only requirement is to generate every possible solution systematically. There are ways to reduce the amount of work you have to do.

What could be its disadvantages?

Some permutations might not be feasible unless the TSP is fully connected. Generating all possible permutations of cities is not practical: try it for n > 100! Is a fast branch and bound enough? A smart f = g + h is required.

How about greedy algorithms?

Attack a problem by constructing the complete solution in a series of steps.

Amazingly simple. Assign the values for all of the decision variables one by one and at every step make the best available decision. Does not always return the optimum solution.

Greedy Algorithm for TSP

The most intuitive greedy algorithm is based on the nearest-neighbor heuristic

Starting from a random city, proceed to the nearest unvisited city and continue until every city has been visited, at which point return to the first city
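A minimal sketch of that nearest-neighbor construction, assuming a distance matrix `dist` with cities numbered 0..n-1 (the function name is illustrative):

```python
def nearest_neighbor_tour(dist, start=0):
    # Greedy construction: from the current city, always visit the
    # nearest city not yet visited; the tour closes back to `start`.
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Each step makes the locally best decision, but as the slides note, the resulting tour is not guaranteed to be optimal.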

Dynamic Programming DP works on the principle of finding an overall solution by operating on an intermediate point that lies between where you are now and where you want to go. Computationally intensive All-pairs shortest path problem
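One classic DP formulation for the TSP is Held-Karp (not named on the slide; shown here as a sketch): compute the shortest path from city 0 through each subset S of cities ending at j, growing the subsets. It is exact but O(n² · 2ⁿ), which illustrates why DP is computationally intensive here.

```python
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # C[(S, j)]: cost of the shortest path that starts at city 0,
    # visits exactly the cities in frozenset S, and ends at j (j in S).
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S - {j})
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```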

Branch and Bound Idea: if we already have a solution with a cost of c, and a partial solution has a lower bound greater than c while we are minimizing, we can prune that branch; we don't have to compute how bad it actually is. Q: Branch and Bound vs. A*
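A hedged sketch of that pruning idea as plain DFS over partial tours (real implementations use much tighter lower bounds than the accumulated cost used here):

```python
import math

def tsp_branch_and_bound(dist):
    n = len(dist)
    best = math.inf

    def extend(tour, cost):
        nonlocal best
        # Prune: with non-negative distances, `cost` is a lower bound
        # on any completion of this partial tour, so if cost >= best
        # we never need to see how bad the completions actually are.
        if cost >= best:
            return
        if len(tour) == n:
            best = min(best, cost + dist[tour[-1]][tour[0]])
            return
        for city in range(n):
            if city not in tour:
                extend(tour + [city], cost + dist[tour[-1]][city])

    extend([0], 0)  # fix the starting city; every tour is a cycle
    return best
```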

What is local search?

In many optimization problems, the path (sequence info.) is irrelevant; the goal state itself is the solution

e.g., The 0/1 Knapsack Problem

State space = set of “complete” solutions Find optimal solution satisfying constraints Keep a single “current” solution, try to improve it (iterative improvement search)

What is the benefit? e.g., Hill-climbing, SA, GA, …

Local searches focus our attention within a local neighborhood of some particular solution.

Procedure of Local Search

1. Pick a solution from the search space and evaluate its merit. Define this as the current solution. 2. Apply a transformation to the current solution to generate a new solution and evaluate its merit. 3. If the new solution is better than the current solution, then exchange it with the current solution; otherwise discard the new solution. 4. Repeat steps 2 and 3 until no transformation in the given set improves the current solution.
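The four steps above can be sketched as a generic loop, assuming the model supplies a `neighbors` generator and an `evaluate` function (minimization assumed; names are illustrative):

```python
def local_search(initial, neighbors, evaluate):
    # Steps 1-4: keep a current solution, move to any improving
    # neighbor, and stop when no transformation improves it.
    current, current_val = initial, evaluate(initial)
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            val = evaluate(candidate)
            if val < current_val:
                current, current_val = candidate, val
                improved = True
                break  # restart the scan from the new current solution
    return current, current_val
```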

What are the key issues of local search?

Type of transformation applied to the current solution

Type of transformation: TSP

Start with any complete tour, perform pair-wise exchanges. The simplest is called 2-opt (2-interchange)

The neighborhood is defined as the set of all tours that can be reached by changing two nonadjacent edges.
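That neighborhood can be enumerated directly (a sketch; the tour is a list of city indices, and reversing the segment between the two cut points is the 2-opt reconnection):

```python
def two_opt_neighbors(tour):
    # Every tour reachable by removing the two non-adjacent edges
    # (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and reconnecting
    # by reversing the segment tour[i+1 .. j].
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # these two edges share a city in the cycle
            yield tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
```

For an n-city tour this yields n(n-3)/2 neighbors.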

You should design the best local transformation for the TSP

Local search for N-Queens Problem

Local Search: Hill-Climbing

It is often called gradient descent

We may have to accept a locally optimal solution

What is a local optimum? Examples?

Like climbing Everest in thick fog with amnesia

How to avoid local optima?

Random restart may work.
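A self-contained sketch of random-restart hill climbing with 2-opt moves, assuming a distance matrix `dist` as before (function names are illustrative):

```python
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def hill_climb(tour, dist):
    # 2-opt hill climbing: accept any improving segment reversal
    # until no 2-opt move improves the tour (a local optimum).
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def random_restart(dist, restarts=20, seed=0):
    # Climb from several random starting tours and keep the best
    # local optimum, reducing the risk of a single poor one.
    rng = random.Random(seed)
    best, best_len = None, float("inf")
    for _ in range(restarts):
        tour = list(range(len(dist)))
        rng.shuffle(tour)
        tour = hill_climb(tour, dist)
        length = tour_length(tour, dist)
        if length < best_len:
            best, best_len = tour, length
    return best, best_len
```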