Presentation transcript:

1 Chapter 5 Advanced Search

2 (blank slide)

3 Chapter 5 Contents
- Constraint satisfaction problems
- Heuristic repair
- The eight queens problem
- Combinatorial optimization problems
- Local search
- Exchanging heuristics
- Iterated local search

4 Chapter 5 Contents, continued
- Simulated annealing
- Genetic algorithms
- Real time A*
- Iterative deepening A*
- Parallel search
- Bidirectional search
- Nondeterministic search
- Nonchronological backtracking

5 Constraint Satisfaction Problems
- Combinatorial optimization problems involve assigning values to a number of variables.
- A constraint satisfaction problem (CSP) is a combinatorial optimization problem with a set of constraints.
- CSPs can be solved using search.
- With many variables it is essential to use heuristics.

6 Different Approaches
1. An extremely simplistic approach to solving this kind of problem is to examine every possible configuration until one is found that satisfies the constraints (sketched below).
2. Use a proper search method together with a heuristic.
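
As an illustration of approach 1 (not from the slides), here is a minimal Python sketch that enumerates candidate placements for the eight queens problem and stops at the first one that satisfies every constraint. Representing a configuration as a permutation of rows is an assumption made for brevity, since it rules out row and column conflicts up front.

```python
# Illustrative sketch (not from the slides): brute-force search for the
# eight queens problem, checking every candidate configuration against
# the constraints until one satisfies them all.
from itertools import permutations

def brute_force_queens(n=8):
    # Each candidate assigns one queen per column; using a permutation of
    # rows already rules out row and column conflicts, so only diagonals
    # need to be checked.
    for rows in permutations(range(n)):
        if all(abs(rows[i] - rows[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            return rows  # a configuration that satisfies the constraints
    return None

print(brute_force_queens())  # e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```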

7 Example
- ore/queens/ (link truncated)
- simple.html (link truncated)

8 Backtracking Search
- Backtracking is a process where steps are taken towards the final solution and the details are recorded.
- If these steps do not lead to a solution, some or all of them may have to be retraced and the relevant details discarded.
- In these circumstances it is often necessary to search through a large number of possible situations in search of feasible solutions.
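
A minimal backtracking sketch, again using the eight queens problem as the example (the problem choice is an assumption, not something the slide specifies): queens are placed column by column, and whenever a partial placement cannot be extended the most recent step is retraced.

```python
# Backtracking sketch for the eight queens problem (the problem choice is
# an assumption): place queens column by column and retrace the latest
# step whenever it cannot lead to a solution.
def backtrack(n=8, placed=()):
    if len(placed) == n:
        return placed                          # all constraints satisfied
    col = len(placed)
    for row in range(n):
        # record this step only if it conflicts with no earlier queen
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placed)):
            result = backtrack(n, placed + (row,))
            if result is not None:
                return result                  # this step led to a solution
    return None                                # dead end: retrace (backtrack)

print(backtrack())
```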

9 Heuristic Repair
- A heuristic method for solving constraint satisfaction problems.
- Generate a possible solution, and then make small changes to bring it closer to satisfying the constraints.

10 The Eight Queens Problem
- A constraint satisfaction problem: place eight queens on a chess board so that no two queens are on the same row, column or diagonal.
- Can be solved by search, but the search tree is large.
- Heuristic repair is very efficient at solving this problem.

11 Heuristic Repair for the Eight Queens Problem
- Initial state: one queen is conflicting with another.
- We'll now move that queen to the square with the fewest conflicts.

12 Heuristic Repair for the Eight Queens Problem
- Second state: now the queen on the f column is conflicting, so we'll move it to the square with the fewest conflicts.

13 Heuristic Repair for the Eight Queens Problem
- Final state: a solution!
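
The following sketch shows heuristic repair in the min-conflicts style described on slides 11-13: start from a complete random placement, pick a conflicting queen, and move it to the square in its column with the fewest conflicts. The step limit and random tie-breaking are illustrative assumptions.

```python
# Heuristic repair (min-conflicts) sketch for the eight queens problem;
# the step limit and random tie-breaking are assumptions.
import random

def conflicts(rows, col, row):
    # number of other queens attacking the square (col, row)
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=1000):
    rows = [random.randrange(n) for _ in range(n)]       # random complete state
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                                   # a solution
        col = random.choice(conflicted)                   # a conflicting queen
        counts = [conflicts(rows, col, r) for r in range(n)]
        best = min(counts)
        # move it to a square in its column with the fewest conflicts
        rows[col] = random.choice([r for r in range(n) if counts[r] == best])
    return None                                           # stalled; try again

print(min_conflicts())
```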

14 Local Search
- Like heuristic repair, local search methods start from a random state and make small changes until a goal state is achieved.
- Local search methods are known as metaheuristics.
- Most local search methods, such as hill-climbing, are susceptible to local maxima.
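
A generic hill-climbing sketch of the kind of local search meant here; `neighbours(state)` and `score(state)` are hypothetical problem-specific functions supplied by the caller.

```python
# Generic hill-climbing sketch; `neighbours` and `score` are hypothetical
# problem-specific functions supplied by the caller.
def hill_climb(state, neighbours, score):
    while True:
        best = max(neighbours(state), key=score, default=None)
        if best is None or score(best) <= score(state):
            return state      # no uphill move left: possibly a local maximum
        state = best          # take the best uphill step
```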

15 Tabu Search
- Tabu search is a metaheuristic that keeps a list of states that have already been visited, in order to avoid repeating paths, even if that means taking a downhill move that looks like a very bad choice.
- It may therefore be able to escape local maxima.
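
A hedged sketch of the tabu idea: keep a bounded memory of recently visited states and never move back into one, even if the only remaining moves go downhill. The memory length and iteration budget are assumptions.

```python
# Tabu search sketch: keep a short memory of recently visited states and
# refuse to revisit them, even when that forces a downhill move.
# The memory length and iteration budget are illustrative assumptions.
from collections import deque

def tabu_search(start, neighbours, score, tabu_size=50, iterations=1000):
    current, best = start, start
    tabu = deque([start], maxlen=tabu_size)     # recently visited states
    for _ in range(iterations):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)    # best non-tabu move, uphill or not
        tabu.append(current)
        if score(current) > score(best):
            best = current                      # remember the best state seen
    return best
```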

16 Example
1. In hill-climbing.
2. From Dr. Edward de Bono, founder of the Cognitive Research Trust in Cambridge: when the following two pieces of plastic are given to someone with the instruction to arrange them into a shape that can easily be described, the pieces always get arranged as shown to give a rectangle.

17-20 (figure slides: the plastic pieces and their arrangements)

21 ch/what_is_tabu_search.asp (link truncated)

22 Exchanging Heuristics
- A simple local search method.
- Heuristic repair is an example of an exchanging heuristic.
- Involves swapping the values of two or more variables at each step until a solution is found.
- A k-exchange involves swapping the values of k variables.
- Can be used to solve the traveling salesman problem.
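
As an example of a 2-exchange applied to the traveling salesman problem, here is a 2-opt sketch: two edges of the tour are exchanged by reversing the segment between them, and the exchange is kept whenever it shortens the tour. The distance matrix `dist` and the list-of-cities tour representation are assumed inputs.

```python
# 2-exchange (2-opt) sketch for the traveling salesman problem: reverse the
# segment between two edges of the tour and keep the exchange whenever it
# shortens the tour. `dist` is an assumed distance matrix; `tour` is a list
# of city indices.
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # swap two edges
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour
```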

23 Iterated Local Search
- A local search is applied repeatedly from different starting states.
- Attempts to avoid getting stuck in local maxima.
- Useful in cases where the search space is extremely large and exhaustive search is not possible.
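
A minimal sketch of the idea as the slide states it (a local search restarted from different random states, keeping the best result); `random_state`, `local_search` and `score` are hypothetical callables.

```python
# Iterated local search as the slide describes it: apply a local search
# repeatedly from different random starting states and keep the best result.
# `random_state`, `local_search` and `score` are hypothetical callables.
def iterated_local_search(random_state, local_search, score, restarts=20):
    best = None
    for _ in range(restarts):
        candidate = local_search(random_state())   # fresh start, then improve
        if best is None or score(candidate) > score(best):
            best = candidate
    return best
```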

24 Ant Colony Optimization (ACO)
- Foraging ants leave a trail of pheromones so that other ants can follow it to the food they have found.
- 04/index.html (link truncated)

25 Simulated Annealing
1. Annealing: to heat and then cool a material, usually to soften it and render it less brittle; gradual cooling is required for some materials (such as steel and glass) but not for others (such as copper and brass).
2. Simulated annealing: based on the Metropolis Monte Carlo simulation.
- Aims at obtaining a minimum value for some function of a large number of variables.
  - This value is known as the energy of the system.

26 Simulated Annealing (2)
- A random start state is selected.
- A small random change is made.
  - If this change lowers the system energy, it is accepted.
  - If it increases the energy, it may still be accepted, with a probability given by the Boltzmann acceptance criterion: e^(-dE/T)

27 Simulated Annealing (3)
- In e^(-dE/T), T is the temperature of the system and dE is the change in energy.
- When the process starts, T is high, meaning increases in energy are relatively likely to be accepted.
- Over successive iterations, T falls and increases in energy become less likely to be accepted.

28 Simulated Annealing (4)
- Because the energy of the system is allowed to increase, simulated annealing is able to escape from local minima.
- Simulated annealing is a widely used local search method for solving problems with very large numbers of variables.
- For example: scheduling problems, the traveling salesman problem, and placing VLSI (chip) components.
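
A simulated annealing sketch following slides 25-28: changes that lower the energy are always accepted, and increases are accepted with probability e^(-dE/T). The geometric cooling schedule and step count are illustrative assumptions; `neighbour` and `energy` are problem-specific functions.

```python
# Simulated annealing sketch: accept any change that lowers the energy, and
# accept an increase with probability e^(-dE/T). The geometric cooling
# schedule and step count are illustrative assumptions.
import math
import random

def simulated_annealing(start, neighbour, energy,
                        t_start=10.0, cooling=0.99, steps=10000):
    state, temp = start, t_start
    for _ in range(steps):
        candidate = neighbour(state)             # a small random change
        d_e = energy(candidate) - energy(state)
        if d_e < 0 or random.random() < math.exp(-d_e / temp):
            state = candidate                    # Boltzmann acceptance criterion
        temp *= cooling                          # the temperature falls over time
    return state
```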

29 Genetic Algorithms
- A method based on biological evolution.
- Create chromosomes which represent possible solutions to a problem.
- The best chromosomes in each generation are bred with each other to produce a new generation.
- Much more detail on this later.
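
Although genetic algorithms are covered in detail later, a minimal sketch of the breeding loop may help fix the idea; the chromosome length, population size, crossover and mutation settings here are all assumptions.

```python
# Minimal genetic algorithm sketch: the fitter half of each generation is
# bred (one-point crossover plus mutation) to produce the next generation.
# Chromosome length, population size and rates are assumptions here.
import random

def genetic_algorithm(fitness, length=20, pop_size=50,
                      generations=100, mutation=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation else bit
                     for bit in child]                # occasional mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: evolve a chromosome that maximises the number of 1 bits.
print(genetic_algorithm(sum))
```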

30 Iterative Deepening A*
- A* is applied iteratively, with incrementally increasing limits on f(n).
- Works well if there are only a few possible values for f(n).
- The method is complete, and has a low memory requirement, like depth-first search.
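
An IDA* sketch: a depth-first search bounded by f(n) = g(n) + h(n), with the bound raised each round to the smallest f value that exceeded it. `successors(state)` yielding (next_state, step_cost) pairs and the heuristic `h` are assumed inputs.

```python
# Iterative deepening A* sketch: depth-first search bounded by f(n) = g(n) +
# h(n), with the bound raised each round to the smallest f that exceeded it.
# `successors(state)` yields (next_state, step_cost); `h` is the heuristic.
def ida_star(start, goal_test, successors, h):
    bound = h(start)

    def search(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                   # report the f that broke the bound
        if goal_test(state):
            return f, path
        minimum = float("inf")
        for nxt, cost in successors(state):
            if nxt in path:
                continue                     # avoid cycles on the current path
            t, found = search(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    while True:
        bound, found = search(start, 0, bound, [start])
        if found is not None:
            return found                     # a path from the start to a goal
        if bound == float("inf"):
            return None                      # no solution exists
```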

31 Parallel Search
- Some search methods can easily be split into tasks which can be solved in parallel.
- Important concepts to consider are:
  - Task distribution
  - Load balancing
  - Tree ordering

32 Bidirectional Search
- Also known as wave search.
- Useful when the start and goal are both known.
- Starts two parallel searches: one from the root node and the other from the goal node.
- Paths are expanded in a breadth-first fashion from both points.
- Where the paths first meet, a complete and optimal path has been formed.
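
A bidirectional breadth-first sketch: two waves grow from the start and the goal, and a path is assembled where they first meet. `neighbours(state)` is assumed to be reversible (an undirected graph); for a guaranteed optimal path the meeting layer would also be scanned for the cheapest crossing, which is omitted here for brevity.

```python
# Bidirectional (wave) search sketch: breadth-first waves grow from the start
# and from the goal, and a path is assembled where they first meet.
# `neighbours(state)` is assumed to be reversible (an undirected graph).
from collections import deque

def bidirectional_search(start, goal, neighbours):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # the two search trees
    frontier_f, frontier_b = deque([start]), deque([goal])

    def build_path(meet):
        forward = []
        node = meet
        while node is not None:                          # meeting point back to start
            forward.append(node)
            node = parents_f[node]
        backward = []
        node = parents_b[meet]
        while node is not None:                          # onwards to the goal
            backward.append(node)
            node = parents_b[node]
        return forward[::-1] + backward

    while frontier_f and frontier_b:
        # expand one breadth-first layer from each side in turn
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            for _ in range(len(frontier)):
                node = frontier.popleft()
                for nxt in neighbours(node):
                    if nxt not in parents:
                        parents[nxt] = node
                        frontier.append(nxt)
                        if nxt in other:                 # the waves have met
                            return build_path(nxt)
    return None
```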

33 Nondeterministic Search
- Useful when very little is known about the search space.
- Combines the depth-first and breadth-first approaches randomly.
- Avoids the problems of both, but does not necessarily have the advantages of either.
- New paths are added to the queue in random positions, meaning the method will follow a random route through the tree until a solution is found.
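
A nondeterministic search sketch along the lines described above: each extended path is inserted into the queue at a random position, so the traversal mixes depth-first and breadth-first behaviour at random. `neighbours(state)` is an assumed successor function.

```python
# Nondeterministic search sketch: each extended path is inserted into the
# queue at a random position, mixing depth-first and breadth-first behaviour.
# `neighbours(state)` is an assumed successor function.
import random

def nondeterministic_search(start, goal_test, neighbours):
    queue = [[start]]
    visited = {start}
    while queue:
        path = queue.pop(0)                   # always take the path at the head
        if goal_test(path[-1]):
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                # new paths go into the queue at random positions
                queue.insert(random.randrange(len(queue) + 1), path + [nxt])
    return None
```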

34 Nonchronological Backtracking
- Depth-first search uses chronological backtracking.
  - It does not use any additional information to make the backtracking more efficient.
- Nonchronological backtracking involves going back to forks in the tree that are more likely to offer a successful solution, rather than simply going back to the next unexplored path.