1 Midterm Review CMSC421 Fall 2005

2 Outline
- Review the material covered by the midterm
- Questions?

3 Subjects covered so far…
- Search: blind & heuristic
- Constraint satisfaction
- Adversarial search
- Logic: propositional and FOL

4 … and subjects to be covered
- Planning
- Uncertainty
- Learning
- and a few more…

5 Search

6 Stating a Problem as a Search Problem
- State space S
- Successor function: x ∈ S → SUCCESSORS(x) ∈ 2^S
- Arc cost
- Initial state s0
- Goal test: x ∈ S → GOAL?(x) = T or F
- A solution is a path joining the initial node to a goal node
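As a sketch of this formulation (the class and all names below are illustrative assumptions, not from the slides), a search problem bundles an initial state, a successor function, and a goal test:

```python
# A minimal sketch of the formulation above; names are assumptions.
class SearchProblem:
    def __init__(self, initial, successors, goal_test):
        self.initial = initial        # initial state s0
        self.successors = successors  # x -> iterable of (action, next_state, cost)
        self.goal_test = goal_test    # x -> True or False

# Toy instance: reach 7 from 0 using +1 and *2 moves of unit cost.
problem = SearchProblem(
    initial=0,
    successors=lambda x: [("+1", x + 1, 1), ("*2", x * 2, 1)],
    goal_test=lambda x: x == 7,
)
print(problem.goal_test(7))   # True
```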

7 Basic Search Concepts
- Search tree
- Search node
- Node expansion
- Fringe of the search tree
- Search strategy: at each stage, it determines which node to expand

8 Search Algorithm
1. If GOAL?(initial-state) then return initial-state
2. INSERT(initial-node, FRINGE)
3. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. For every state s' in SUCCESSORS(s):
      i. Create a new node n' as a child of n
      ii. If GOAL?(s') then return path or goal state
      iii. INSERT(n', FRINGE)
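A direct, hedged transcription of the slide's algorithm (function names are my own). With a FIFO fringe this is breadth-first search; popping from the right end instead would give depth-first. Note that, as on the slide, no revisited-state check is done yet:

```python
from collections import deque

def search(initial_state, successors, goal_test):
    if goal_test(initial_state):                       # step 1
        return [initial_state]
    fringe = deque([(initial_state, [initial_state])])  # step 2: INSERT
    while fringe:                                      # step 3
        state, path = fringe.popleft()                 # n <- REMOVE(FRINGE)
        for s2 in successors(state):                   # for every successor s'
            if goal_test(s2):                          # goal test on generation
                return path + [s2]
            fringe.append((s2, path + [s2]))           # INSERT(n', FRINGE)
    return None                                        # empty fringe: failure

# Toy state space: states are integers, successors are +1 and *2.
print(search(1, lambda x: [x + 1, x * 2], lambda x: x == 6))   # [1, 2, 3, 6]
```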

9 Performance Measures
- Completeness: a search algorithm is complete if it finds a solution whenever one exists [What about the case when no solution exists?]
- Optimality: a search algorithm is optimal if it returns an optimal solution whenever a solution exists
- Complexity: measures the time and amount of memory required by the algorithm

10 Blind vs. Heuristic Strategies
- Blind (or uninformed) strategies do not exploit state descriptions to select which node to expand next
- Heuristic (or informed) strategies exploit state descriptions to select the “most promising” node to expand

11 Blind Strategies
- Breadth-first
  - Bidirectional
- Depth-first
  - Depth-limited
  - Iterative deepening
- Uniform-cost (variant of breadth-first): arc cost = c(action) ≥ ε > 0

12 Comparison of Strategies
- Breadth-first is complete and optimal, but has high space complexity
- Depth-first is space-efficient, but is neither complete nor optimal
- Iterative deepening is complete and optimal, with the same space complexity as depth-first and almost the same time complexity as breadth-first

13 Avoiding Revisited States
- Requires comparing state descriptions
- Breadth-first search: store all states associated with generated nodes in CLOSED; if the state of a new node is in CLOSED, discard the node

14 Avoiding Revisited States
- Depth-first search:
  - Solution 1: store all states associated with nodes in the current path in CLOSED; if the state of a new node is in CLOSED, discard the node → only avoids loops
  - Solution 2: store all generated states in CLOSED; if the state of a new node is in CLOSED, discard the node → same space complexity as breadth-first!

15 Uniform-Cost Search (Optimal)
- Each arc has some cost c ≥ ε > 0
- The cost of the path to each fringe node N is g(N) = Σ costs of arcs
- The goal is to generate a solution path of minimal cost
- The queue FRINGE is sorted in increasing cost
- Needs a modified search algorithm

16 Modified Search Algorithm
1. INSERT(initial-node, FRINGE)
2. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. If GOAL?(s) then return path or goal state
   e. For every state s' in SUCCESSORS(s):
      i. Create a node n' as a successor of n
      ii. INSERT(n', FRINGE)

17 Avoiding Revisited States in Uniform-Cost Search
- When a node N is expanded, the path to N is also the best path from the initial state to STATE(N), provided it is the first time STATE(N) is encountered
- So:
  - When a node is expanded, store its state in CLOSED
  - When a new node N is generated:
    - If STATE(N) is in CLOSED, discard N
    - If there exists a node N' in the fringe such that STATE(N') = STATE(N), discard the node (N or N') with the highest-cost path
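A sketch of uniform-cost search with this revisited-state discipline (the graph and names are hypothetical). Rather than searching the fringe for a duplicate state, this version handles duplicates lazily: the cheaper entry is popped first, and any later entry for an already-expanded state is skipped:

```python
import heapq

def uniform_cost_search(initial, successors, goal_test):
    fringe = [(0, initial, [initial])]    # entries (g, state, path), sorted by g
    closed = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state in closed:
            continue                      # stale, higher-cost duplicate
        if goal_test(state):              # goal test at expansion time
            return g, path
        closed.add(state)                 # store the expanded state in CLOSED
        for s2, cost in successors(state):
            if s2 not in closed:
                heapq.heappush(fringe, (g + cost, s2, path + [s2]))
    return None

# Hypothetical graph with arc costs: A-B 1, A-C 5, B-C 1, C-G 2.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": [("G", 2)], "G": []}
print(uniform_cost_search("A", lambda s: graph[s], lambda s: s == "G"))
# (4, ['A', 'B', 'C', 'G'])
```

Note the direct path A→C (cost 5) is beaten by A→B→C (cost 2), which the lazy duplicate handling resolves correctly.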

18 Best-First Search
- Exploits state descriptions to estimate how promising each search node is
- An evaluation function f maps each search node N to a positive real number f(N)
- Traditionally, the smaller f(N), the more promising N
- Best-first search sorts the fringe in increasing f

19 Heuristic Function
- The heuristic function h(N) estimates the distance of STATE(N) to a goal state
- Its value is independent of the current search tree; it depends only on STATE(N) and the goal test
- Example: h1(N) = number of misplaced tiles = 6 (for the 8-puzzle state pictured on the slide)
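As a sketch, the two classic 8-puzzle heuristics (h1 here, and h2 from the experimental-results slide below) can be computed directly from a state; the tuple encoding and goal layout are assumptions for illustration:

```python
# States: tuples of 9 tiles in row-major order, 0 marking the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # hypothetical goal layout

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of tiles to their goal positions."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        j = GOAL.index(t)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

s = (1, 2, 3, 4, 5, 6, 0, 7, 8)      # tiles 7 and 8 one square from home
print(h1(s), h2(s))                  # 2 2
```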

20 Classical Evaluation Functions
- h(N): heuristic function [independent of the search tree]
- g(N): cost of the best path found so far between the initial node and N [dependent on the search tree]
- f(N) = h(N) → greedy best-first search
- f(N) = g(N) + h(N) → A*

21 Can We Prove Anything?
- If the state space is finite and we discard nodes that revisit states, the search is complete, but in general not optimal
- If the state space is finite and we do not discard nodes that revisit states, in general the search is not complete
- If the state space is infinite, in general the search is not complete

22 Admissible Heuristic
- Let h*(N) be the cost of the optimal path from N to a goal node
- The heuristic function h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)
- An admissible heuristic function is always optimistic!
- Note: G is a goal node → h(G) = 0

23 A* Search (most popular algorithm in AI)
- f(N) = g(N) + h(N), where:
  - g(N) = cost of the best path found so far to N
  - h(N) = admissible heuristic function
- For all arcs: 0 < ε ≤ c(N, N')
- The “modified” search algorithm is used
- Best-first search is then called A* search
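A sketch of A*: uniform-cost search with the fringe ordered by f(N) = g(N) + h(N), reusing the closed-list discipline from the uniform-cost slide. The tiny instance below is a hypothetical example, not from the slides:

```python
import heapq

def astar(initial, successors, goal_test, h):
    fringe = [(h(initial), 0, initial, [initial])]   # entries (f, g, state, path)
    closed = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)    # pop smallest f = g + h
        if state in closed:
            continue                                 # stale entry, already expanded
        if goal_test(state):
            return g, path
        closed.add(state)
        for s2, cost in successors(state):
            if s2 not in closed:
                heapq.heappush(fringe, (g + cost + h(s2), g + cost, s2, path + [s2]))
    return None

# Hypothetical instance: walk 0 -> 5 by +1 or +2 steps of unit cost;
# h(x) = (5 - x) / 2 never overestimates (each step covers at most 2),
# so it is admissible and the returned cost is optimal.
succ = lambda x: [(x + 1, 1), (x + 2, 1)] if x < 5 else []
cost, path = astar(0, succ, lambda x: x == 5, lambda x: max(0, (5 - x) / 2))
print(cost)   # 3
```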

24 Result #1 A* is complete and optimal

25 Experimental Results
- 8-puzzle with:
  - h1 = number of misplaced tiles
  - h2 = sum of distances of tiles to their goal positions
- Random generation of many problem instances
- Average effective branching factors (number of expanded nodes in parentheses):

  d     IDS            A1*           A2*
  12    (3,644,035)    1.42 (227)    1.24 (73)
  24    –              (39,135)      1.26 (1,641)

26 Iterative Deepening A* (IDA*)
- Idea: reduce the memory requirement of A* by applying a cutoff on values of f
- Consistent heuristic h
- Algorithm IDA*:
  1. Initialize cutoff to f(initial-node)
  2. Repeat:
     a. Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff
     b. Reset cutoff to the smallest value f of the non-expanded (leaf) nodes
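The two steps above can be sketched as repeated cost-bounded depth-first searches; each pass raises the cutoff to the smallest f that exceeded it. The example instance is the same hypothetical one used for A* above:

```python
import math

def ida_star(initial, successors, goal_test, h):
    def dfs(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return f, None                  # report f of a non-expanded node
        if goal_test(state):
            return f, path
        next_cutoff = math.inf
        for s2, cost in successors(state):
            if s2 in path:                  # avoid loops along the current path
                continue
            t, found = dfs(s2, g + cost, cutoff, path + [s2])
            if found is not None:
                return t, found
            next_cutoff = min(next_cutoff, t)
        return next_cutoff, None

    cutoff = h(initial)                     # 1. initialize cutoff to f(initial-node)
    while True:                             # 2. repeat with the updated cutoff
        t, found = dfs(initial, 0, cutoff, [initial])
        if found is not None:
            return found
        if math.isinf(t):
            return None                     # nothing left beyond any cutoff
        cutoff = t                          # smallest f of non-expanded nodes

succ = lambda x: [(x + 1, 1), (x + 2, 1)] if x < 5 else []
print(ida_star(0, succ, lambda x: x == 5, lambda x: max(0, (5 - x) / 2)))
```

Only the current depth-first path is stored, which is the memory saving the slide describes.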

27 Local Search
- Light-memory search method
- No search tree; only the current state is represented!
- Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
- Many similarities with optimization techniques
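As a sketch of the idea, hill climbing on n-queens keeps only the current state and repeatedly moves to the best neighbor; the encoding below (one queen per column) is a common assumption, not from the slides, and the method can stop at a local minimum rather than a solution:

```python
def conflicts(state):
    """Pairs of attacking queens; state[c] = row of the queen in column c."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state):
    while True:
        n = len(state)
        # Best single-queen move (may be a no-op when nothing improves).
        col, row = min(((c, r) for c in range(n) for r in range(n)),
                       key=lambda m: conflicts(state[:m[0]] + [m[1]] + state[m[0] + 1:]))
        candidate = state[:col] + [row] + state[col + 1:]
        if conflicts(candidate) >= conflicts(state):
            return state            # local minimum (not necessarily a solution)
        state = candidate

start = [0, 1, 2, 3]                # 4-queens, all on one diagonal: 6 conflicts
print(conflicts(start), conflicts(hill_climb(start)))
```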

28 Search problems
- Blind search
- Heuristic search: best-first and A*
- Construction of heuristics
- Local search
- Variants of A*

29 When to Use Search Techniques?
1. The search space is small, and
   - no other technique is available, or
   - developing a more efficient technique is not worth the effort
2. The search space is large, and
   - no other technique is available, and
   - there exist “good” heuristics

30 Constraint Satisfaction

31 Constraint Satisfaction Problem
- Set of variables {X1, X2, …, Xn}
- Each variable Xi has a domain Di of possible values; usually Di is discrete and finite
- Set of constraints {C1, C2, …, Cp}
- Each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables
- Goal: assign a value to every variable such that all constraints are satisfied

32 CSP as a Search Problem
- Initial state: empty assignment
- Successor function: a value is assigned to any unassigned variable, provided it does not conflict with the currently assigned variables
- Goal test: the assignment is complete
- Path cost: irrelevant

33 Questions
1. Which variable X should be assigned a value next?
   - Minimum Remaining Values / most-constrained variable
2. In which order should its domain D be sorted?
   - Least-constraining value
3. How should constraints be propagated?
   - Forward checking
   - Arc consistency
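A sketch combining two of these answers, MRV variable selection plus forward checking, on a tiny hypothetical map-coloring CSP (value ordering is left in domain order for brevity):

```python
def backtrack(assignment, domains, neighbors):
    if len(assignment) == len(domains):
        return assignment
    # Minimum Remaining Values: unassigned variable with the smallest domain.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        # Forward checking: prune `value` from each unassigned neighbor.
        pruned = {n: value for n in neighbors[var]
                  if n not in assignment and value in domains[n]}
        if any(len(domains[n]) == 1 for n in pruned):
            continue                       # a neighbor's domain would empty
        for n in pruned:
            domains[n] = [x for x in domains[n] if x != value]
        result = backtrack({**assignment, var: value}, domains, neighbors)
        if result is not None:
            return result
        for n, v in pruned.items():        # undo the pruning on backtrack
            domains[n] = domains[n] + [v]
    return None

# Hypothetical 3-region map: every pair of regions is adjacent.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
domains = {v: ["red", "green", "blue"] for v in neighbors}
print(backtrack({}, domains, neighbors))
```

Because assigning a variable removes its value from every unassigned neighbor's domain, later assignments can never conflict with earlier ones.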

34 Adversarial Search

35 Specific Setting
Two-player, turn-taking, deterministic, fully observable, zero-sum, time-constrained game:
- State space
- Initial state
- Successor function: tells which actions can be executed in each state and gives the successor state for each action
- MAX’s and MIN’s actions alternate, with MAX playing first in the initial state
- Terminal test: tells if a state is terminal and, if yes, whether it is a win or a loss for MAX, or a draw
- All states are fully observable

36 Choosing an Action: Basic Idea
1. Using the current state as the initial state, build the game tree uniformly to the maximal depth h (called the horizon) feasible within the time limit
2. Evaluate the states of the leaf nodes
3. Back up the results from the leaves to the root and pick the best action, assuming the worst from MIN
→ Minimax algorithm

37 Minimax Algorithm
1. Expand the game tree uniformly from the current state (where it is MAX’s turn to play) to depth h
2. Compute the evaluation function at every leaf of the tree
3. Back up the values from the leaves to the root of the tree as follows:
   a. A MAX node gets the maximum of the evaluations of its successors
   b. A MIN node gets the minimum of the evaluations of its successors
4. Select the move toward the MIN node that has the largest backed-up value
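The backup rule in step 3 can be sketched on an explicit game tree (the toy tree below is a hypothetical example): leaves carry evaluation values, internal nodes are lists of successors:

```python
def minimax(node, is_max):
    if isinstance(node, (int, float)):   # leaf: evaluation function value
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)   # step 3a / 3b

# Depth-2 tree: MAX to move, three MIN nodes below.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # 3  (MIN nodes back up 3, 2, 2; MAX picks 3)
```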

38 Alpha-Beta Pruning
- Explore the game tree to depth h in depth-first manner
- Back up alpha and beta values whenever possible
- Prune branches that can’t lead to changing the final decision

39 Example: The beta value of a MIN node is an upper bound on the final backed-up value; it can never increase.

40 Example: The alpha value of a MAX node is a lower bound on the final backed-up value; it can never decrease.

41 Alpha-Beta Algorithm
- Update the alpha/beta value of the parent of a node N when the search below N has been completed or discontinued
- Discontinue the search below a MAX node N if its alpha value is ≥ the beta value of a MIN ancestor of N
- Discontinue the search below a MIN node N if its beta value is ≤ the alpha value of a MAX ancestor of N
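A sketch of the algorithm on the same kind of explicit toy tree used for minimax: alpha is the best value a MAX ancestor can already guarantee, beta the best for a MIN ancestor, and search below a node stops as soon as alpha ≥ beta:

```python
def alphabeta(node, alpha, beta, is_max):
    if isinstance(node, (int, float)):   # leaf: evaluation function value
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # a MIN ancestor will never allow this
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                # a MAX ancestor already has better
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # 3, same as minimax
```

Pruning never changes the root value; it only skips branches that cannot affect the final decision.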

42 Logical Representations and Theorem Proving

43 Logical Representations
- Propositional logic
- First-order logic
- Syntax and semantics
- Models, entailment, etc.

44 The Game
Rules:
1. Red goes first
2. On their turn, a player must move their piece
3. A player must move to a neighboring square; or, if their opponent is adjacent with a blank square on the far side, they can hop over the opponent
4. The player that makes it to the far side first wins

45 Logical Inference
- Propositional: truth tables or resolution
- FOL: resolution + unification
- Strategies:
  - shortest clause first
  - set of support
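The truth-table method can be sketched by enumeration: KB entails a query iff the query holds in every model of the KB. Encoding sentences as Python functions over a model dict is an assumption made here for brevity:

```python
from itertools import product

def entails(symbols, kb, query):
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not query(model):
            return False          # a model of KB in which the query fails
    return True

# KB: P => Q, P.  Query: Q.  (Modus ponens, checked by enumeration.)
kb = [lambda m: (not m["P"]) or m["Q"], lambda m: m["P"]]
print(entails(["P", "Q"], kb, lambda m: m["Q"]))   # True
```

This is exponential in the number of symbols, which is why resolution (and, for FOL, unification with strategies like set of support) matters in practice.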

46 Questions?