Artificial Intelligence in Game Design Lecture 22: Heuristics and Other Ideas in Board Games.

Artificial Intelligence in Game Design Lecture 22: Heuristics and Other Ideas in Board Games

Good and Bad Heuristics
A heuristic for evaluating a board must be accurate:
–Directly related to the “likelihood” of a win
–Inversely related to the “distance” from a win
Example: Tic-Tac-Toe heuristic
H(board) = 2 × (# of rows/columns/diagonals where X could win in one move)
         + (# of rows/columns/diagonals where X could win in two moves)
         − 2 × (# of rows/columns/diagonals where O could win in one move)
         − (# of rows/columns/diagonals where O could win in two moves)
Problem: this gives two very different boards the same measure of −2, even though one of them is a guaranteed loss for the AI.
[Figure: two Tic-Tac-Toe boards that both evaluate to −2; in one of them the opponent is about to win, a guaranteed loss for the AI.]
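As a concrete illustration, here is a minimal Python sketch of this heuristic, assuming a 3×3 board stored as a list of lists containing 'X', 'O', or None; the function names, board representation, and weights are illustrative, not taken from the lecture:

# All eight winnable lines on a Tic-Tac-Toe board, as (row, column) coordinates.
LINES = [
    [(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1), (2, 2)],  # rows
    [(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],  # columns
    [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)],                            # diagonals
]

def count_winnable(board, player, moves_needed):
    """Lines containing only `player` pieces plus exactly `moves_needed` empty cells."""
    count = 0
    for line in LINES:
        cells = [board[r][c] for r, c in line]
        if cells.count(player) == 3 - moves_needed and cells.count(None) == moves_needed:
            count += 1
    return count

def h(board):
    # Weighted count: lines X can win in one move count double, O's lines count negatively.
    return (2 * count_winnable(board, 'X', 1) + count_winnable(board, 'X', 2)
            - 2 * count_winnable(board, 'O', 1) - count_winnable(board, 'O', 2))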

Good and Bad Heuristics
A better heuristic must take this into account!
H(board) =
  MAXINT   if it is my move next and I have two in a row
  −MAXINT  if it is the opponent's move next and they have two in a row
  −MAXINT  if it is my move and the opponent has more than one two-in-a-row (a fork: only one line can be blocked)
  MAXINT   if it is the opponent's move and I have more than one two-in-a-row
  the function above, otherwise
This works better, but is more complex to compute: there is a tradeoff between the speed of computing the heuristic value and its accuracy.
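A sketch of this refined heuristic, reusing count_winnable and h from the previous sketch; MAXINT is just a large stand-in constant, and whose turn it is must be supplied by the caller:

MAXINT = 10**6  # stands in for "certain win" / "certain loss"

def h_refined(board, ai_move_next):
    # Assumes the AI plays X, matching h() above.
    ai_twos  = count_winnable(board, 'X', 1)   # lines the AI can complete in one move
    opp_twos = count_winnable(board, 'O', 1)   # lines the opponent can complete in one move
    if ai_move_next and ai_twos >= 1:
        return MAXINT          # AI wins on its next move
    if not ai_move_next and opp_twos >= 1:
        return -MAXINT         # opponent wins on their next move
    if ai_move_next and opp_twos > 1:
        return -MAXINT         # opponent has a fork; only one line can be blocked
    if not ai_move_next and ai_twos > 1:
        return MAXINT          # AI has a fork
    return h(board)            # otherwise fall back to the simple heuristic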

Linear Heuristic Functions
Heuristic is some function of the individual pieces on the board:
–Usually weighted in some way
–Overall board value = Σᵢ H(pieceᵢ)
–Very fast to compute
Example: Chess
–Pieces have “standard values” (queen = 9, rook = 5, bishop = 3, knight = 3, pawn = 1)
–Heuristic = sum of AI piece values − sum of player piece values
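A minimal sketch of such a material-count heuristic; the piece letters and list-based representation are assumptions for illustration:

# Standard material values; kings are not counted.
PIECE_VALUES = {'Q': 9, 'R': 5, 'B': 3, 'N': 3, 'P': 1}

def material_heuristic(ai_pieces, player_pieces):
    """Each argument is a list of piece letters, e.g. ['Q', 'R', 'P', 'P']."""
    ai_total = sum(PIECE_VALUES.get(p, 0) for p in ai_pieces)
    player_total = sum(PIECE_VALUES.get(p, 0) for p in player_pieces)
    return ai_total - player_total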

Linear Heuristic Functions
Often based on the position of pieces on the board
Example: games where the purpose is to move all pieces to some goal (Backgammon, Sorry!), and where pieces can often be sent back to “start”
–Heuristic value = total distance of player pieces from goal − total distance of AI pieces from goal (higher when the AI is closer to home)
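A sketch under the assumption that each piece is represented simply by its remaining distance to the goal (being sent back to start just makes that distance larger):

def race_heuristic(ai_distances, player_distances):
    """Higher is better for the AI: the AI's pieces are closer to home than the player's."""
    return sum(player_distances) - sum(ai_distances)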

Linear Heuristic Functions
Terrain-based games:
–Heuristic value = total value of the regions a player has influence over
–Biases the AI towards taking high ground, cover, etc.
[Figure: map with four regions, A (height 40), B (height 50), C (height 20), D (height 30); green's positions give it a total influence of 7, red's give −3, so the total heuristic value for green is 10.]
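One possible reading of this, as a sketch; the representation and the scoring of region values per side are assumptions, not the lecture's code:

def terrain_heuristic(region_values, ai_regions, opponent_regions):
    """Sum of the values of regions each side has influence over; higher favours the AI.
    region_values: {region_name: value}; the two region sets say who influences what."""
    return (sum(region_values[r] for r in ai_regions)
            - sum(region_values[r] for r in opponent_regions))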

Linear Heuristic Functions
Reversi (Othello):
–Board value based entirely on piece positions
  Corners very valuable (can't be flipped)
  Sides somewhat valuable (very difficult to flip)
  Middle of little value (easily flipped)
–H(board) = C₁ × (number of pieces in corners) + C₂ × (number of pieces on sides) + C₃ × (number of pieces in the middle), where C₁ >> C₂ >> C₃
Reversi is one of the easiest types of game for AI:
–Low branching factor (5 to 15 possible moves)
–Good heuristics
–A single move can greatly change the board: hard for a human player to see, easy for MinMax lookahead to see
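A sketch of this positional heuristic; the concrete weights 100/10/1 and the 'B'/'W' board encoding are illustrative choices, not values from the lecture:

C1, C2, C3 = 100, 10, 1   # corner, side, middle weights (C1 >> C2 >> C3)

def reversi_heuristic(board, ai_color):
    """board: 8x8 list of 'B', 'W', or None; returns AI positional score minus opponent's."""
    score = 0
    for r in range(8):
        for c in range(8):
            piece = board[r][c]
            if piece is None:
                continue
            if r in (0, 7) and c in (0, 7):
                weight = C1            # corner square
            elif r in (0, 7) or c in (0, 7):
                weight = C2            # edge square
            else:
                weight = C3            # interior square
            score += weight if piece == ai_color else -weight
    return score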

Nonlinear Heuristics
Based on relationships between pieces on the board
Simple example: prefer chess pieces that protect one another
–Piece value = piece value × 1.5 if protected by another piece
–Piece value = piece value × 0.5 if not protected and threatened by an opponent piece
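A sketch of applying these multipliers; detecting which pieces are protected or threatened (the expensive part) is assumed to be done elsewhere, e.g. by the move generator:

def adjusted_piece_value(base_value, is_protected, is_threatened):
    if is_protected:
        return base_value * 1.5      # defended pieces are worth more
    if is_threatened:
        return base_value * 0.5      # undefended, attacked pieces are worth less
    return base_value

def nonlinear_material(pieces):
    """pieces: list of (base_value, is_protected, is_threatened) for one side's pieces."""
    return sum(adjusted_piece_value(v, p, t) for v, p, t in pieces)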

Nonlinear Heuristics
Drawback: usually much more expensive to compute
–n pieces → O(n²) relationships between them
–May be able to explore more moves with a simpler (linear) heuristic
Example:
–A simple linear heuristic takes k ms to evaluate per board
–A complex nonlinear heuristic takes 400k ms to evaluate per board
–With an average of 20 possible next moves per board, there are 20² = 400 boards two moves ahead
–So in the time the nonlinear heuristic evaluates one board, the linear heuristic could on average explore two additional levels of the game tree

Linear vs. Nonlinear Heuristics
Nonlinear heuristics often detect future changes in the board:
–Pieces threatened by others might be captured
–Pieces protected by others are less likely to be captured
The same effect can often be seen with additional levels of lookahead using a linear heuristic
Example: the “knight fork” in chess, which a nonlinear heuristic could look for explicitly

Linear vs. Nonlinear Heuristics
Searching 2 more moves into the game tree will show the result of the knight fork:
–Reaches the state where the rook is now gone
–That state has a higher value for a linear heuristic based on piece values
–MinMax then propagates this back and gives the “knight fork” state a high value
–The speed of the simple heuristic is what can make this extra lookahead possible
[Figure: game tree from the board with the knight fork: black moves the king (other moves allow checkmate, value = MAXINT), the knight takes the rook, and the board with the black rook gone receives a high heuristic value h across the max and min levels.]

Linearizing Nonlinear Heuristics
It is worth working hard to rethink nonlinear heuristics
Example: make the heuristic linear in terms of positions rather than pieces
–Backgammon: prefer pieces in groups of more than one, to prevent them being sent back to start
–Simply count positions occupied by only one piece (“blots”) rather than comparing pieces to each other, as sketched below
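A sketch of the linearized count, assuming the AI's checkers are stored as a mapping from board point to checker count (names are illustrative):

def blot_count(ai_points):
    """Number of points where the AI has a single, vulnerable checker."""
    return sum(1 for count in ai_points.values() if count == 1)

# A position-based heuristic could then subtract a penalty per blot, e.g.:
#   h = (player_distance_total - ai_distance_total) - BLOT_PENALTY * blot_count(ai_points)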

Horizon Effect
A major weakness of purely linear heuristics based on piece values:
–“Throwing good money after bad”
–May need to recognize when a move does not actually improve the board position
Chess example: the queen is pinned by a bishop; the rook could be moved in the way, but the rook is then captured and the queen will still be lost.

Horizon Effect
From the board with the queen pinned:
–Option 1: queen takes bishop, bishop takes queen → board with the queen lost: White is down 6 points
–Option 2: move the rook in front, bishop takes rook → board with the rook lost: White is down only 5 points
  But the queen will still be lost: queen takes bishop, bishop takes queen → board with queen and rook lost: White is now down 11 points!
Option 1 looks worse at the cutoff level, but is actually best in the long run!

Data-Driven Approaches
Basing actions on known strategies rather than tree search
Opening books of initial moves (chess):
–Often the moves at the start of a Grandmaster match
Each book entry consists of:
–A list of moves
–An evaluation of the final outcome: should we follow this strategy?
Allows faster processing:
–No need to search the game tree until the end of the sequence
–Can just use the book's evaluation as the heuristic
[Figure: from the current board, follow the boards in the book's sequence; only at the end of the sequence does branching search start.]

Opening Books
Choosing a book as our own opening strategy (it must have a good final evaluation!):
–Make moves according to the script
–If the opponent follows the script, keep following it
–If the opponent leaves the script, start MinMax; their deviation will probably be in a way that benefits us
Must also recognize when the opponent is using an opening book:
–Keep a database of the moves in known opening books
–Match the current board against the database to find whether it is part of a book sequence (sketched below)
–If so, decide whether following the script is a good idea
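A minimal sketch of book matching; the book entries, moves, and evaluations below are invented placeholders, not real opening analysis:

# Each book line maps a move sequence to an evaluation of its final outcome.
OPENING_BOOK = {
    ("e4", "e5", "Nf3"): +0.3,     # illustrative entries only
    ("d4", "d5", "c4"): +0.2,
}

def match_opening(moves_so_far):
    """Return (next_book_move, evaluation) if the move history is a prefix of a book line."""
    history = tuple(moves_so_far)
    for line, evaluation in OPENING_BOOK.items():
        if line[:len(history)] == history and len(line) > len(history):
            return line[len(history)], evaluation
    return None, None

# Usage idea: play the returned book move while one matches; fall back to MinMax otherwise.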

Other Set Plays
End games:
–Many games have different strategies when few pieces are left
  Forcing checkmate in chess
  Kings vs. kings in checkers
  Getting the last pieces home in backgammon
–Recognize the situation based on the pieces left
–Follow set strategies
Set evaluation values:
–No heuristic evaluation of the board; instead, match the board against a database to get its evaluation
–Works best if sub-boards can be matched
–Example: edge configurations in Othello, where a given edge pattern has a known value

Alternative Approaches
Go:
–19 × 19 board → branching factor of up to 361
–Impossible for MinMax
–The only known approaches are based on template matching: look in a local area for configurations that match known strategies
–Still a very open problem: the best AI for Go only plays at amateur level