Search Strategies

- Tries – for word searches, spell checking, spelling correction (see the sketch below)
- Digital Search Trees – for searching for frequent keys (in text, databases, applications)
- Min-max search – for finding the optimal move in games
- Alpha-beta pruning – an improvement of min-max search
- Branch-and-bound – an algorithmic technique to limit the search
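As a concrete illustration of the first item, here is a minimal trie sketch for the exact word lookup a spell checker performs (the word list, class names and methods are illustrative, not from the slides):

```python
# Minimal trie sketch for spell-checking-style word lookup (illustrative word list).

class TrieNode:
    def __init__(self):
        self.children = {}        # letter -> TrieNode
        self.is_word = False      # True if a dictionary word ends at this node

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False      # no path for this spelling: flag as a misspelling
        return node.is_word

dictionary = Trie(["search", "sea", "seat", "tree"])
print(dictionary.contains("seat"))    # True
print(dictionary.contains("serch"))   # False -> candidate for spelling correction
```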

- In chess, both players know where the pieces are, they alternate moves, and they are free to make any legal move. The object of the game is to checkmate the other player, to avoid being checkmated, or to achieve a draw if that is the best outcome available.
- A chess program selects moves using a search function: a function that is passed information about the game and tries to find the best move for the side the program is playing.

- Min-max search operates on a tree structure representing the space of all possible moves (the game tree). The initial state of the game is the root of this very large tree; a tree search then finds the move that would be selected if both sides were to play the best possible moves.

- Suppose that at the root position (the position on the board now) it is White's turn to move. The Max function is called and all of White's legal moves are generated. For each resulting position, the Min function is called; it scores the position and returns a value. Since it is White to move, and White wants as positive a score as possible, the move with the largest score is selected as best, and the value of this move is returned.
- The Min function works in reverse: it is called when it is Black's turn to move, and since Black wants as negative a score as possible, the move with the most negative score is selected.
- These functions are mutually recursive: they call each other until the desired search depth is reached. When the functions "bottom out", they return the result of the Evaluate function.
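A minimal, runnable sketch of this mutual recursion (not from the original slides): the "game" is reduced to a toy tree in which an internal node is a list of child positions and a leaf is its evaluation score, with positive scores favouring White.

```python
# Minimal minimax sketch: internal nodes are lists of children, leaves are
# evaluation scores (positive favours White / Max).

def max_value(node):
    """White (Max) to move: return the largest score among the children."""
    if not isinstance(node, list):          # bottomed out: return the Evaluate result
        return node
    return max(min_value(child) for child in node)

def min_value(node):
    """Black (Min) to move: return the smallest score among the children."""
    if not isinstance(node, list):
        return node
    return min(max_value(child) for child in node)

# White to move at the root: three candidate moves, each answered by Black.
root = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
scores = [min_value(child) for child in root]   # [3, 2, 2]
best_move = scores.index(max(scores))           # White picks move 0 (value 3)
```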

- Alpha-beta pruning
  - a technique that improves upon the minimax algorithm by ignoring branches of the game tree that cannot influence the final choice of move.

- The basic idea behind this modification to the minimax search algorithm is the following: during the search for the next move, not every move (i.e. every node in the search tree) needs to be considered in order to reach a correct decision.
- In other words, if the move being considered leads to a worse outcome than our current best choice, then the first reply the opponent could make that is worse for us than our current best is the last node we need to look at on that branch; the remaining replies can be pruned.

function MAX-VALUE(state, game, α, β) returns the minimax value of state
  inputs: state, the current state in the game
          game, the game description
          α, the best score for MAX along the path to state
          β, the best score for MIN along the path to state

  if CUTOFF-TEST(state) then return EVAL(state)
  for each s in SUCCESSORS(state) do
    α <- MAX(α, MIN-VALUE(s, game, α, β))
    if α >= β then return β
  end
  return α

function MIN-VALUE(state, game, α, β) returns the minimax value of state
  if CUTOFF-TEST(state) then return EVAL(state)
  for each s in SUCCESSORS(state) do
    β <- MIN(β, MAX-VALUE(s, game, α, β))
    if β <= α then return α
  end
  return β
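The same pseudocode, transcribed as a runnable Python sketch over the toy tree used earlier (illustrative only; in a real program CUTOFF-TEST, EVAL and SUCCESSORS would come from the game itself):

```python
# Alpha-beta sketch over the same toy tree: internal nodes are lists of
# children, leaves are evaluation scores (positive favours White / Max).

def ab_max_value(node, alpha, beta):
    if not isinstance(node, list):           # cutoff: return the evaluation
        return node
    for child in node:
        alpha = max(alpha, ab_min_value(child, alpha, beta))
        if alpha >= beta:                    # Min will never allow this line
            return beta                      # prune the remaining children
    return alpha

def ab_min_value(node, alpha, beta):
    if not isinstance(node, list):
        return node
    for child in node:
        beta = min(beta, ab_max_value(child, alpha, beta))
        if beta <= alpha:                    # Max already has something at least this good
            return alpha
    return beta

root = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = ab_max_value(root, float("-inf"), float("inf"))   # 3, same as plain minimax
```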

- With the previously listed assumptions taken into account, the following gains/improvements can be calculated.
- With alpha-beta pruning, the number of nodes that need to be examined on average is O(b^(d/2)), as opposed to the minimax algorithm, which must examine O(b^d) nodes to find the best move. In the worst case alpha-beta still examines every node, just as the original minimax algorithm does. But in the best case the effective branching factor becomes b^(1/2) (i.e. √b) instead of b.
- For chess, which normally has a branching factor of about 35, the effective branching factor is reduced to roughly 6 (√35 ≈ 5.9). This means a chess program using alpha-beta can look ahead about twice as far in the same amount of time, potentially improving its skill from a novice to an expert level player.
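To spell out the "twice as far" claim (a worked check of the numbers above, not from the original slides): examining roughly N = 35^d nodes lets plain minimax reach depth d. Under best-case alpha-beta, reaching depth 2d costs about 35^((2d)/2) = 35^d = N nodes. So halving the exponent doubles the depth reachable for a fixed number of node evaluations.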

- Branch-and-bound. Definition: an algorithmic technique for finding an optimal solution by keeping track of the best solution found so far; if a partial solution cannot improve on that best solution, it is abandoned and not explored further.

- For instance, suppose we want to find the shortest route from Zarahemla to Manti, and the shortest route found so far is 387 kilometres. Suppose we next consider routes through Cumeni. If the shortest distance from Zarahemla to Cumeni is 350 km and Cumeni is 46 km from Manti in a straight line, then any route through Cumeni is at least 396 km (350 + 46), which is worse than the shortest known route. So there is no reason to explore possible roads from Cumeni; those paths can be pruned.

- The method can be implemented as a backtracking algorithm (a modified depth-first search) or with a priority queue that orders partial solutions by their lower bounds.
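A minimal sketch of the priority-queue variant, assuming a small illustrative road map and a table of straight-line distances to the goal used as the lower bound (the extra city "Gideon" and all distances other than the 350 km and 46 km quoted above are made up for the example):

```python
import heapq

# Illustrative road map: graph[a][b] = road distance in km between adjacent cities.
graph = {
    "Zarahemla": {"Cumeni": 350, "Gideon": 180},
    "Gideon":    {"Manti": 207},
    "Cumeni":    {"Manti": 60},
}
# Straight-line distance to Manti: a lower bound on any remaining road distance.
straight_line_to_goal = {"Zarahemla": 300, "Gideon": 150, "Cumeni": 46, "Manti": 0}

def branch_and_bound_shortest(start, goal):
    best = float("inf")                       # length of best complete route found so far
    # Priority queue of partial solutions ordered by lower bound = cost so far + straight line.
    queue = [(straight_line_to_goal[start], 0, start)]
    while queue:
        bound, cost, city = heapq.heappop(queue)
        if bound >= best:                     # cannot improve on the best route: prune
            continue
        if city == goal:
            best = cost                       # a complete route; keep it if it is shorter
            continue
        for nxt, road in graph.get(city, {}).items():
            new_cost = cost + road
            heapq.heappush(queue, (new_cost + straight_line_to_goal[nxt], new_cost, nxt))
    return best

print(branch_and_bound_shortest("Zarahemla", "Manti"))   # 387 via Gideon; Cumeni (bound 396) is pruned
```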

 Chess  Checkers  Tic-tac-toe  Other games