
[Worked-example slides: minimax on a small game tree, filling in the max/min value pairs at each node step by step; the tree diagrams do not survive in this transcript. The deck then continues with Prolog code for minimax, followed by alpha-beta pruning.]

-1000 is certainly the worst value/move: any legal move improves on it.

/* Uses:
   move(+Pos, -Move) :- Move is a legal move in position Pos.
   move(+Move, +Pos, -Pos1) :- Making Move in position Pos results in
       position Pos1.
   value(+Pos, -V) :- V is the static value of position Pos for player 1.
       Should be between -999 and +999, where +999 is best for player 1.
*/

minimax(Pos, Move, Depth) :-
    minimax(Depth, Pos, 1, _, Move).

/* minimax(+Depth, +Position, +Player, -BestValue, -BestMove) :-
   Chooses the BestMove from the current Position using the minimax
   algorithm, searching Depth ply ahead. Player indicates whether this
   move is by the player (1) or the opponent (-1). */

minimax(0, Position, Player, Value, _) :-
    value(Position, V),
    Value is V*Player.     % Value is from the current player's perspective.
minimax(D, Position, Player, Value, Move) :-
    D > 0,
    D1 is D - 1,
    findall(M, move(Position, M), Moves),  % There must be at least one move!
    minimax(Moves, Position, D1, Player, -1000, nil, Value, Move).

/* minimax(+Moves, +Position, +Depth, +Player, +Value0, +Move0,
           -BestValue, -BestMove)
   Chooses the best move from the list Moves from the current Position,
   searching Depth ply ahead. Player indicates if we are currently
   minimizing (-1) or maximizing (1). Move0 records the best move found
   so far and Value0 its value. */

minimax([], _, _, _, Value, Best, Value, Best).
minimax([Move|Moves], Position, D, Player, Value0, Move0, BestValue, BestMove) :-
    move(Move, Position, Position1),
    Opponent is -Player,
    minimax(D, Position1, Opponent, OppValue, _OppMove),
    Value is -OppValue,               % the opponent's gain is our loss
    (   Value > Value0
    ->  minimax(Moves, Position, D, Player, Value, Move, BestValue, BestMove)
    ;   minimax(Moves, Position, D, Player, Value0, Move0, BestValue, BestMove)
    ).
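The same negamax idea can be sketched in Python. This is an illustration, not part of the deck: the nested-list tree encoding (a position is either an int, the static value for player 1, or a list of successor positions) is an assumption made for the example.

```python
# Negamax sketch mirroring the Prolog minimax/5 and minimax/8 above.
# Illustrative only: the tree encoding is an assumption.

def minimax(position, depth, player):
    """Return (best_value, best_move_index) from the current player's view."""
    if depth == 0 or isinstance(position, int):
        # In this toy encoding a cutoff always lands on an int leaf, the
        # static value for player 1; flip the sign like `Value is V*Player`.
        return position * player, None
    best_value, best_move = -1000, None   # -1000: certainly the worst value
    for i, child in enumerate(position):
        opp_value, _ = minimax(child, depth - 1, -player)
        value = -opp_value                # the opponent's gain is our loss
        if value > best_value:
            best_value, best_move = value, i
    return best_value, best_move

# Three MIN nodes under a MAX root: MIN picks 3, 2, 2; MAX picks 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, 2, 1))                # -> (3, 0)
```

Note that, exactly as in the Prolog, there is no separate minimizing case: each level negates the value returned for the opponent.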

[Worked-example slides: alpha-beta pruning on the same game tree; as soon as a node's value meets the opponent's bound, the remaining branches are cut off (marked X in the original diagrams, which do not survive in this transcript).]

1000 serves as infinity: no static value can reach it.

/* Uses:
   move(+Pos, -Move) :- Move is a legal move in position Pos.
   move(+Move, +Pos, -Pos1) :- Making Move in position Pos results in
       position Pos1.
   value(+Pos, -V) :- V is the static value of position Pos for player 1.
       Should be between -999 and +999, where +999 is best for player 1.
*/

alph_bet(Pos, Move, Depth) :-
    alph_bet(Depth, Pos, 1, -1000, 1000, _, Move).

/* alph_bet(+Depth, +Position, +Player, +Alpha, +Beta, -BestValue, -BestMove) :-
   Chooses the BestMove from the current Position using the alpha-beta
   algorithm, searching Depth ply ahead. Player indicates if the next
   move is by the player (1) or the opponent (-1). */

alph_bet(0, Position, Player, _, _, Value, _) :-
    value(Position, V),
    Value is V*Player.
alph_bet(D, Position, Player, Alpha, Beta, Value, Move) :-
    D > 0,
    D1 is D - 1,
    findall(M, move(Position, M), Moves),
    alph_bet(Moves, Position, D1, Player, Alpha, Beta, nil, Value, Move).

/* alph_bet(+Moves, +Position, +Depth, +Player, +Alpha, +Beta, +Move0,
            -BestValue, -BestMove)
   Chooses the best move from the list Moves from the current Position,
   searching Depth ply ahead. Player indicates if the next move is by the
   player (1) or the opponent (-1). Move0 records the best move found so
   far and Alpha its value. If a value >= Beta is found, then this
   position is too good to be true: the opponent will not move us into
   this position. */

alph_bet([], _, _, _, Value, _, Best, Value, Best).
alph_bet([Move|Moves], Position, D, Player, Alpha, Beta, Move0, BestValue, BestMove) :-
    move(Move, Position, Position1),
    Opponent is -Player,
    OppAlpha is -Beta,                 % the search window is negated
    OppBeta is -Alpha,                 % and swapped for the opponent
    alph_bet(D, Position1, Opponent, OppAlpha, OppBeta, OppValue, _OppMove),
    Value is -OppValue,
    (   Value >= Beta ->
        BestValue = Value, BestMove = Move   % cut off: too good to be true
    ;   Value > Alpha ->
        alph_bet(Moves, Position, D, Player, Value, Beta, Move, BestValue, BestMove)
    ;   alph_bet(Moves, Position, D, Player, Alpha, Beta, Move0, BestValue, BestMove)
    ).
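For comparison, the same alpha-beta negamax can be sketched in Python. Again an illustration, not part of the deck: the nested-list tree encoding and the `evaluated` counter (added to make the pruning visible) are assumptions.

```python
# Alpha-beta negamax sketch mirroring alph_bet/7 and alph_bet/9 above.
# Illustrative only: the tree encoding and `evaluated` are assumptions.

evaluated = []                            # leaves actually evaluated

def alpha_beta(position, depth, player, alpha=-1000, beta=1000):
    """1000 serves as infinity; the window is negated and swapped for
    the opponent, as in `OppAlpha is -Beta, OppBeta is -Alpha`."""
    if depth == 0 or isinstance(position, int):
        evaluated.append(position)
        return position * player, None    # static value * player, as before
    best_move = None
    for i, child in enumerate(position):
        opp_value, _ = alpha_beta(child, depth - 1, -player, -beta, -alpha)
        value = -opp_value
        if value >= beta:                 # cut off: too good to be true
            return value, i
        if value > alpha:                 # new best move: raise alpha
            alpha, best_move = value, i
    return alpha, best_move

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alpha_beta(tree, 2, 1))             # -> (3, 0), same as plain minimax
print(len(evaluated))                     # -> 7: two of the nine leaves pruned
```

On this tree the second MIN node is cut off after its first leaf (2 is already worse for MAX than the 3 secured at the first node), so two leaves are never evaluated.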

Othello

> prolog -l /it/kurs/logpro/othello/play_game.pl
...
| ?- start_game.
Select white player. (1) human (2) program
|: 2.
White player full program name:
|: std.
...
Black players full program name:
|: '/home/ / /myothello.pl'.
...
(Othello window pops up.)

Othello shell protocol, first move (shell program between the White and Black player programs):

Initialize: the shell calls initialize(white, SW) in the white program and initialize(black, SB) in the black program, obtaining the player states SW and SB.

Ask move: the shell calls best_move(SB, Move) in the black program, which answers e.g. Move = 6-5.

Execute move: the shell calls move(6-5, SB, NSB) in the black program and opponent_move(6-5, SW, NSW) in the white program, and executes the move in the Othello window.

Othello shell protocol, subsequent moves:

Game over? The shell calls game_over(SB, …); if the game is not over, play continues.

Ask move: the shell calls best_move(SW, Move) in the white program, which answers some Move = c-r (column-row).

Execute move: the shell calls move(c-r, SW, NSW) in the white program and opponent_move(c-r, SB, NSB) in the black program, and executes the move in the Othello window.
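The turn-taking in the two diagrams can be sketched as a loop. This is a hypothetical Python rendering, not the actual shell: the method names mirror the Prolog predicates (initialize, game_over, best_move, move, opponent_move) but the classes, signatures, and the Dummy player are invented for illustration, and game_over is simplified to a single-argument test.

```python
# Hypothetical sketch of the shell's turn loop from the diagrams above.
# The method names mirror the Prolog predicates; everything else is invented.

def play(white, black):
    sw = white.initialize("white")             # initialize(white, SW)
    sb = black.initialize("black")             # initialize(black, SB)
    mover, other = (black, sb), (white, sw)    # Black moves first
    while not mover[0].game_over(mover[1]):    # game_over(SB, ...)?
        prog, state = mover
        move = prog.best_move(state)           # e.g. Move = 6-5
        state = prog.move(move, state)         # shell executes it in the window
        oprog, ostate = other
        ostate = oprog.opponent_move(move, ostate)
        mover, other = (oprog, ostate), (prog, state)   # swap roles
    return mover[1], other[1]

class Dummy:                                   # minimal stand-in player
    def initialize(self, colour): return []
    def game_over(self, state): return len(state) >= 2
    def best_move(self, state): return "6-5"
    def move(self, m, state): return state + [m]
    def opponent_move(self, m, state): return state + [m]

final_a, final_b = play(Dummy(), Dummy())
print(final_a, final_b)
```

The point of the design is symmetry: both player programs see the same predicate interface, and the shell alternates which one is asked for a move.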