State-Space Search (CSE473, Winter 1998, 02/04/98)
Administrative
– Next topic: Planning. Reading: Chapter 7, skip 7.3 through 7.5
– Office hours/review after class today, Thursday 2:30
Last time
– informed search; satisficing and optimizing search (A*)
This time
– adversarial (game-tree) search
– introduction to Planning

Search in Adversarial Games
Non-adversarial games: you make a sequence of moves, and at the end you get a payoff depending on the state you are in.
– Games of perfect information: deterministic moves (e.g., FreeCell).
– Games against nature: you make a move, then "nature" changes the world.
  Same as perfect information if nature is perfectly predictable, but more generally probabilistic (a stochastic next-state generator).
  But we assume that nature is dispassionate: her choice of move is not meant to minimize your payoff.
Adversarial games: you make a move, then an opponent makes a move, and at the end both get a payoff (possibly negative).
– Both you and the opponent are attempting to maximize an individual payoff function.
– Often maximizing one means minimizing the other: a zero-sum game.
– Perfect information: everybody knows all payoff functions.

Example: The Game of Chicken
[Payoff matrix for the two players (You vs. Him) omitted in this transcript.]
What is your optimal strategy if:
– actions are chosen simultaneously?
– you get to choose first?
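The transcript drops the slide's payoff matrix, but the "choose first" question can still be illustrated with a hypothetical, textbook-style Chicken matrix; the numbers below are assumptions, not the slide's values.

# Hypothetical Chicken payoffs; each entry maps (your move, his move)
# to (your payoff, his payoff). These values are illustrative only.
PAYOFF = {
    ('swerve', 'swerve'):     (0, 0),
    ('swerve', 'straight'):   (-1, 1),
    ('straight', 'swerve'):   (1, -1),
    ('straight', 'straight'): (-10, -10),
}
MOVES = ('swerve', 'straight')

def his_best_reply(your_move):
    """His payoff-maximizing reply once your move is fixed (the 'choose first' case)."""
    return max(MOVES, key=lambda him: PAYOFF[(your_move, him)][1])

# If you commit first and he best-responds, compare your outcomes:
for you in MOVES:
    him = his_best_reply(you)
    print(f"you {you} -> he plays {him}, payoffs {PAYOFF[(you, him)]}")

Under these assumed payoffs, committing first to "straight" forces the opponent to swerve, which is why moving first changes the analysis.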

General Approach to Game Playing by Search
– Expand the tree some fixed number of moves.
– Apply a heuristic evaluation function to the (incomplete) leaf states.
– Apply MINIMAX to compute the best first move.
Example: TIC-TAC-TOE
– Players are MAX (drawing X's) and MIN (drawing O's).
– e(p) is +∞ if p is a win for MAX, -∞ if p is a win for MIN, and otherwise (number of rows/columns/diagonals still available to MAX) - (number of rows/columns/diagonals still available to MIN).
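A minimal sketch of this recipe in Python, for illustration only: the helpers successors(state, player) and evaluate(state), standing in for the move generator and e(p), are assumed names, not part of the course code.

def minimax(state, depth, maximizing, successors, evaluate):
    """Back up the value of `state`, cutting the tree off at `depth` plies."""
    children = successors(state, 'MAX' if maximizing else 'MIN')
    if depth == 0 or not children:
        return evaluate(state)              # apply e(p) at the search frontier
    values = (minimax(c, depth - 1, not maximizing, successors, evaluate)
              for c in children)
    return max(values) if maximizing else min(values)

def best_first_move(state, depth, successors, evaluate):
    """MAX's best first move after backing up values from a depth-limited tree."""
    return max(successors(state, 'MAX'),
               key=lambda c: minimax(c, depth - 1, False, successors, evaluate))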

MINIMAX search, cutoff depth = 2
[Game-tree figure omitted: tic-tac-toe positions expanded two plies through MAX, MIN, and MAX layers, with frontier positions scored by e(p), e.g. 6-5 = 1 and 5-5 = 0.]

Early Pruning: The ALPHA-BETA Procedure
The previous algorithm (implicitly):
– generate the tree
– evaluate the leaves
– back up the values to determine the optimal first action
Interleaving evaluation with generation means that some paths never need to be generated at all.
Cache partial evaluation information at each node:
– A MAX node has an α value, the best (greatest) choice found so far. It can never decrease.
– A MIN node has a β value, the best (least) choice found so far. It can never increase.

Cached Values
[Game-tree figure omitted: MAX, MIN, and MAX layers with cached bounds at the internal nodes, e.g. α = 10 and β = 4 on one branch, α = -1 and β = 3 on another.]

Two Sorts of Pruning
– Search can be discontinued below any MIN node whose β value is less than or equal to the α value of any of its MAX-node ancestors.
– Search can be discontinued below any MAX node whose α value is greater than or equal to the β value of any of its MIN-node ancestors.
This can have an order-of-magnitude impact on the search
– provided you choose the first alternative(s) well!
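Carrying the α and β bounds down the tree gives the usual alpha-beta sketch below, reusing the same assumed successors/evaluate helpers as the MINIMAX sketch above; it is an illustration, not the course's actual code.

import math

def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Depth-limited MINIMAX value of `state` with alpha-beta pruning."""
    children = successors(state, 'MAX' if maximizing else 'MIN')
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for c in children:
            value = max(value, alphabeta(c, depth - 1, alpha, beta, False,
                                         successors, evaluate))
            alpha = max(alpha, value)   # alpha at a MAX node never decreases
            if alpha >= beta:           # a MIN ancestor already has a better option
                break                   # prune the remaining children
        return value
    else:
        value = math.inf
        for c in children:
            value = min(value, alphabeta(c, depth - 1, alpha, beta, True,
                                         successors, evaluate))
            beta = min(beta, value)     # beta at a MIN node never increases
            if beta <= alpha:           # a MAX ancestor already has a better option
                break
        return value

Call it at the root with alpha = -math.inf and beta = math.inf; the order-of-magnitude savings depends on examining the strongest moves first.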

State-Space Search: Summary
A very abstract characterization of problem solving:
– non-deterministic graph search
An interesting split between domain-dependent and domain-independent aspects of the process:
– the domain-independent part can be a library
Extensions to optimizing search, adversarial search, and continuous spaces.
Disadvantages:
– the "direction" of the search may be wrong (progression versus regression)
– the domain-independent components are "black boxes"; perhaps state generation and goal recognition could be further automated

Planning: The "Neutral" Problem Description
Inputs:
– a set of states S = {s1, s2, ..., sn}
– a set of actions A = {a1, a2, ..., am}, where each action is a partial function ai: S → S
– a unique initial state si
– a goal region G ⊆ S
Output:
– a sequence of actions b1, b2, ..., bk such that bk(... b3(b2(b1(si))) ...) ∈ G
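A minimal sketch of this neutral description solved by plain breadth-first search, in Python; the encoding (actions as a dictionary of partial functions that return None when inapplicable, in_goal as a membership test for G) is an illustrative assumption.

from collections import deque

def plan(initial_state, actions, in_goal):
    """Return a list of action names whose composition maps the initial
    state into the goal region, or None if no such sequence is found."""
    frontier = deque([(initial_state, [])])
    seen = {initial_state}
    while frontier:
        state, prefix = frontier.popleft()
        if in_goal(state):                 # state is in G
            return prefix
        for name, act in actions.items():
            nxt = act(state)               # partial function: None means "not applicable"
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, prefix + [name]))
    return None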

Planning as Search
Search: we can easily implement a planner using the standard search code/algorithms. But we would like to:
– have a declarative representation for states and actions
  ease in specification (move generator, goal checker)
  could support explanation and learning tasks
– exploit the goal better using a regression algorithm
  we believe fan-out is worse than fan-in
– further exploit the nature of the goal
  the goal is a conjunction of subgoals
  a common solution technique is "divide and conquer": to solve G = G1 ∧ G2 ∧ ..., solve the Gi subgoals separately, then merge the solutions

Planning States and Operators
Example:
– the goal is to be at B with the fuel tank full
– the truck is currently at A with the fuel tank half full
– A and B are connected
– you can only refuel at B
State:
– S0 = { at(TRUCK, A), fuel(HALF), connected(A, B), refuel-at(B) }
– everything is false unless explicitly stated true
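One way to encode this state in Python under the slide's closed-world convention; the tuple encoding of literals and the holds helper are illustrative assumptions, only the predicate names come from S0 above.

# S0 under the closed-world convention: only literals in the set are true.
S0 = frozenset({
    ('at', 'TRUCK', 'A'),      # truck is currently at A
    ('fuel', 'HALF'),          # fuel tank is half full
    ('connected', 'A', 'B'),   # A and B are connected
    ('refuel-at', 'B'),        # you can only refuel at B
})

def holds(state, literal):
    """Closed-world query: a literal holds only if explicitly asserted."""
    return literal in state

assert holds(S0, ('at', 'TRUCK', 'A'))
assert not holds(S0, ('fuel', 'FULL'))   # not stated, hence false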

States versus State Descriptions
A state is a set of formulas that describes a single state of the world.
– By convention, we include only positive formulas and assume everything else is false.
We also need to represent sets of states.
– The goal "be at B and have half a tank of gas" describes a set of states: there might be other formulas that describe the world, but we don't care which of those states we end up in.
A state description is a set of formulas that describes a set of states.
– Both positive and negative formulas are allowed in the set.
– Any formula not mentioned is a "don't care".
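Continuing the sketch above, a state description with positive and negative formulas can be tested against a concrete state such as S0; the dictionary encoding and the satisfies helper are assumptions made for illustration.

# A state description: formulas that must hold (positive), formulas that
# must not hold (negative); anything unmentioned is a "don't care".
GOAL_DESCRIPTION = {
    'positive': {('at', 'TRUCK', 'B'), ('fuel', 'HALF')},
    'negative': set(),
}

def satisfies(state, description):
    """True if the concrete state is one of the states the description denotes."""
    return (description['positive'] <= state and
            not (description['negative'] & state))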