An eye for an eye only ends up making the whole world blind. -Mohandas Karamchand Gandhi, born October 2nd, 1869. Lecture of October 2nd, 2001.

Presentation transcript:

An eye for an eye only ends up making the whole world blind. -Mohandas Karamchand Gandhi, born October 2nd, 1869. Lecture of October 2nd, 2001

Sunday, May 11th, 1997: What makes Deep Blue tick?

Game Playing (Adversarial Search)

[Figure: alpha-beta example tree showing node bounds (<=2, <=14, <=5, =2) and a cutoff.]
- Whenever a node gets its “true” value, its parent’s bound gets updated.
- When all children of a node have been evaluated (or a cutoff occurs below that node), the current bound of that node is its true value.
- Two types of cutoffs:
  - If a min node n has bound <=l, and a max ancestor of n, say m, has a bound >=k, then a cutoff occurs as long as l <= k.
  - If a max node n has bound >=k, and a min ancestor of n, say m, has a bound <=l, then a cutoff occurs as long as l <= k.
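These bound updates and cutoff tests are exactly what the alpha-beta algorithm performs. Below is a minimal sketch in Python; the successors(state) and evaluate(state) helpers are hypothetical placeholders for a game's move generator and static evaluation function, not something defined in the slides:

```python
import math

def alpha_beta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Depth-limited minimax with alpha-beta cutoffs.

    alpha: best value MAX can already guarantee on this path (a lower bound).
    beta:  best value MIN can already guarantee on this path (an upper bound).
    A cutoff happens as soon as the bounds cross, i.e. alpha >= beta.
    """
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)              # depth limit or terminal: static evaluation
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, successors, evaluate))
            alpha = max(alpha, value)       # this MAX node's lower bound gets updated
            if alpha >= beta:               # a MIN ancestor's bound is already <= alpha: cut
                break
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, successors, evaluate))
            beta = min(beta, value)         # this MIN node's upper bound gets updated
            if alpha >= beta:               # a MAX ancestor's bound is already >= beta: cut
                break
        return value
```

Called as alpha_beta(start, d, -math.inf, math.inf, True, successors, evaluate), it returns the backed-up minimax value of the start state for MAX.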

Von Neumann (Min-Max theorem), Claude Shannon (finite look-ahead), Chaturanga, India (~550 AD) (Proto-Chess), John McCarthy (alpha-beta pruning), Donald Knuth (alpha-beta analysis). Lecture of 4th October, 2001

Searching Tic-Tac-Toe using Minimax

Click for an animation of Alpha-beta search in action on Tic-Tac-Toe

Evaluation Functions: Tic-Tac-Toe
- If win for Max: +infinity
- If loss for Max: -infinity
- If draw for Max: 0
- Else: (# rows/cols/diagonals open for Max) - (# rows/cols/diagonals open for Min)
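A minimal sketch of this evaluation function in Python, assuming (as a representation choice not made in the slides) a 3x3 board stored as a list of lists containing 'X' (Max), 'O' (Min), or None:

```python
WIN, LOSS = float('inf'), float('-inf')

# The 8 winning lines of a 3x3 board, as lists of (row, col) cells.
LINES = (
    [[(r, c) for c in range(3)] for r in range(3)]                   # rows
    + [[(r, c) for r in range(3)] for c in range(3)]                 # columns
    + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]   # diagonals
)

def evaluate(board, max_player='X', min_player='O'):
    """Static evaluation of a Tic-Tac-Toe position from Max's point of view."""
    # Terminal states are checked first, before the non-terminal heuristic.
    for line in LINES:
        cells = [board[r][c] for r, c in line]
        if cells == [max_player] * 3:
            return WIN                      # win for Max
        if cells == [min_player] * 3:
            return LOSS                     # loss for Max
    if all(board[r][c] is not None for r in range(3) for c in range(3)):
        return 0                            # draw
    # A line is "open" for a player if the opponent has no mark on it.
    open_for_max = sum(all(board[r][c] != min_player for r, c in line) for line in LINES)
    open_for_min = sum(all(board[r][c] != max_player for r, c in line) for line in LINES)
    return open_for_max - open_for_min
```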

Why is “deeper” better? Possible reasons:
- Taking mins/maxes of the evaluation values of the leaf nodes improves their collective accuracy.
- Going deeper makes the agent notice “traps”, thus significantly improving the evaluation accuracy.
All evaluation functions first check for termination states before computing the non-terminal evaluation.

RTA*
[Figure: example graph with start node S, successors n, m, k, and goal G, annotated with g, h, and f values (f = g + h), one branch evaluating to infinity.]
- Grow the tree to depth d.
- Apply the f-evaluation to the leaf nodes.
- Propagate f-values up to the parent nodes: f(parent) = min(f(children)).
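A small sketch of this lookahead-and-backup step in Python; successors(state) and f(state) (the static f = g + h evaluation) are assumed helpers, not defined in the slides:

```python
def lookahead_f(state, depth, successors, f):
    """Depth-limited lookahead that backs up f(parent) = min over children."""
    if depth == 0:
        return f(state)                    # leaf of the lookahead tree: static f = g + h
    children = successors(state)
    if not children:
        return float('inf')                # dead end: infinite estimated cost
    return min(lookahead_f(child, depth - 1, successors, f)
               for child in children)
```

An RTA* agent then moves to the child with the smallest backed-up value and repeats the lookahead from the new state.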

Multi-player Games