2048 Game Solver
By: Casey Savage, Hayley Stueber, and James Olson

About 2048
Goal of the game: create the 2048 tile.
How to play:
- Slide tiles left, right, up, or down to combine like numbers: for example, two adjacent 2s combine into a 4, two adjacent 4s into an 8, and so on.
- Tiles slide in the chosen direction until they are stopped by another tile or the edge of the board.
- After each move, a new tile with a value of 2 or 4 appears in an empty spot on the board.
- The player loses when the board is full and no legal move remains.
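The slide-and-merge rule is easier to see in code than in prose. Below is a minimal Python sketch of a single left slide of one row, under our own assumptions (the function name and the row-as-list representation are illustrative, not taken from the original solver):

    def slide_row_left(row):
        """Slide one row to the left and merge equal neighbors once, 2048-style.

        `row` is a list of ints, with 0 meaning an empty cell.
        """
        tiles = [v for v in row if v != 0]          # drop empty cells
        merged = []
        i = 0
        while i < len(tiles):
            if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
                merged.append(tiles[i] * 2)         # two equal tiles combine into one
                i += 2                              # each tile merges at most once per move
            else:
                merged.append(tiles[i])
                i += 1
        return merged + [0] * (len(row) - len(merged))  # pad with empty cells

For example, slide_row_left([2, 2, 0, 4]) returns [4, 4, 0, 0].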

Previous Work
- Matt Overlan: wrote a 2048 solver using a minimax algorithm. High probability of winning, but very slow, largely because of its animation.
- Vasilis Vryniotis: created a 2048 solver in Java using an alpha-beta pruning algorithm. A very slow and ineffective solver that would not display its process.
- Gayas Chowdhury and Vignesh Dhamodaran: created a game solver using an expectimax algorithm.

Problem Statement
Create a game solver that attempts to win 2048 by reaching the 2048 tile while maximizing efficiency and success rate. To do this we examined:
- the minimax algorithm
- the alpha-beta pruning algorithm
- the expectimax algorithm

2048 Game Tree Diagram [diagram: a tree expanding from the top node through depth 1, depth 2, depth 3, ..., depth n]

Minimax Algorithm
- Has the "player" select the most effective move.
- Normally anticipates the "opponent" (here, the computer) placing the next tile in the position most inconvenient for the player; in this case, however, we simply used the game's own method for placing the next tile.

Minimax Pseudocode

    function minimax(node, depth, maximizingPlayer):
        if depth = 0 or node is a terminal node:
            return the heuristic value of node
        if maximizingPlayer:
            bestValue := -∞
            for each child of node:
                val := minimax(child, depth - 1, FALSE)
                bestValue := max(bestValue, val)
            return bestValue
        else:
            bestValue := +∞
            for each child of node:
                val := minimax(child, depth - 1, TRUE)
                bestValue := min(bestValue, val)
            return bestValue

    (* Initial call for maximizing player *)
    minimax(origin, depth, TRUE)
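For reference, here is a runnable Python version of the same idea. This is only a sketch: children(state) and heuristic(state) are hypothetical caller-supplied helpers, not functions from the original project.

    def minimax(state, depth, maximizing, children, heuristic):
        """Plain minimax over a game tree defined by caller-supplied callables."""
        kids = children(state)
        if depth == 0 or not kids:      # depth limit reached or terminal position
            return heuristic(state)
        if maximizing:
            return max(minimax(c, depth - 1, False, children, heuristic) for c in kids)
        return min(minimax(c, depth - 1, True, children, heuristic) for c in kids)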

Results and Analysis of Minimax
- Big O complexity: O(n^d), where n = number of moves in a game and d = depth of the search.
- 40% of trials fail with 512 as the highest tile.
- 50% success rate.

Alpha-Beta Pruning Algorithm
- An extension of the minimax algorithm.
- Main goal: decrease the number of nodes to evaluate by comparing alpha and beta values and "pruning" subtrees.

Alpha-Beta Pruning Pseudocode

    function MAX-VALUE(state, α, β, depth):
        if depth = 0:
            return value(state)
        for s in Successors(state):
            α := max(α, MIN-VALUE(s, α, β, depth - 1))
            if α >= β:
                return α        // cutoff: MIN will avoid this branch
        return α

    function MIN-VALUE(state, α, β, depth):
        if depth = 0:
            return value(state)
        for s in Successors(state):
            β := min(β, MAX-VALUE(s, α, β, depth - 1))
            if β <= α:
                return β        // cutoff: MAX will avoid this branch
        return β

    α: the best value for MAX found so far
    β: the best value for MIN found so far
    state: the current game position
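A runnable Python sketch of the same procedure, again assuming hypothetical children(state) and heuristic(state) helpers supplied by the caller rather than functions from the original project:

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, children, heuristic):
        """Minimax with alpha-beta cutoffs over caller-supplied callables."""
        kids = children(state)
        if depth == 0 or not kids:
            return heuristic(state)
        if maximizing:
            value = -math.inf
            for c in kids:
                value = max(value, alphabeta(c, depth - 1, alpha, beta, False,
                                             children, heuristic))
                alpha = max(alpha, value)
                if alpha >= beta:      # MIN would never allow this line: prune
                    break
            return value
        value = math.inf
        for c in kids:
            value = min(value, alphabeta(c, depth - 1, alpha, beta, True,
                                         children, heuristic))
            beta = min(beta, value)
            if alpha >= beta:          # MAX would never allow this line: prune
                break
        return value

An initial call for the maximizing player would look like alphabeta(root, 7, -math.inf, math.inf, True, children, heuristic).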

Alpha-Beta Algorithm Results
Using a depth of 7:
- Success rate: 60-70%
- When it fails, the highest tile reached is 1024 about 90% of the time
- Average time to solve a game: 72 seconds
- Big O complexity: O(n^(d/2)), where n = number of moves in a game and d = depth of the search
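As a rough illustration (our numbers, not measurements from the project): with n = 4 legal moves per turn and depth d = 7, plain minimax examines on the order of 4^7 = 16,384 leaf positions, while alpha-beta with good move ordering examines closer to 4^(7/2) ≈ 128, which is why it reaches the same depth so much faster.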

Expectimax Algorithm
- Expectimax is another variation of the minimax game-tree algorithm.
- In addition to min and max nodes, expectimax has chance nodes, which take the expected value of the random event that is about to occur; here, that event is the next randomly placed 2 or 4 tile on the 2048 board.
- The next move (sliding the board right, left, up, or down) is the one with the highest expected value among all the possibilities.
- The expected value is the probability-weighted average of the children's values.

Expectimax Algorithm
- The expectimax algorithm evaluates each node by considering all possible tile placements, weighted by the probability of each tile occurring.
- A 2 tile is placed with probability 90% and a 4 tile with probability 10%.
- The algorithm then chooses the move with the best expected outcome.
- Depending on the chosen depth d, it evaluates up to d future moves.
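For instance, with illustrative numbers not taken from the project: if sliding left leads to a board scored 100 when a 2 spawns and 60 when a 4 spawns, the expected value of that move is 0.9 × 100 + 0.1 × 60 = 96.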

Expectimax Algorithm Pseudocode

    function expectimax(node, depth):
        if node is a terminal node or depth = 0:
            return the heuristic value of node
        else if the adversary is to play at node:
            // Return value of minimum-valued child node
            let α := +∞
            foreach child of node:
                α := min(α, expectimax(child, depth - 1))
        else if we are to play at node:
            // Return value of maximum-valued child node
            let α := -∞
            foreach child of node:
                α := max(α, expectimax(child, depth - 1))
        else if random event at node:
            // Return weighted average of all child nodes' values
            let α := 0
            foreach child of node:
                α := α + (Probability[child] × expectimax(child, depth - 1))
        return α
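Below is a runnable Python sketch of the chance-node logic as it applies to 2048. It assumes a dict h of hypothetical caller-supplied helpers (legal_moves, apply_move, empty_cells, place_tile, heuristic) that are not part of the original code:

    def expectimax(board, depth, is_player_turn, h):
        """Expectimax sketch for 2048: player nodes maximize, tile-spawn nodes average."""
        if depth == 0:
            return h["heuristic"](board)
        if is_player_turn:
            moves = h["legal_moves"](board)
            if not moves:                              # no legal move: game over
                return h["heuristic"](board)
            return max(expectimax(h["apply_move"](board, m), depth - 1, False, h)
                       for m in moves)
        # Chance node: the game spawns a 2 with probability 0.9 or a 4 with
        # probability 0.1, uniformly over the empty cells.
        cells = h["empty_cells"](board)
        if not cells:
            return h["heuristic"](board)
        expected = 0.0
        for cell in cells:
            for value, prob in ((2, 0.9), (4, 0.1)):
                child = h["place_tile"](board, cell, value)
                expected += (prob / len(cells)) * expectimax(child, depth - 1, True, h)
        return expected

The move actually played would then be the legal move whose resulting chance-node value is largest.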

Results and Analysis of Expectimax
- The expectimax algorithm with a depth of 7 had a success rate of 80%.
- The lowest tile it failed on was 1024.
- It reached a 4096 tile 30% of the time.
- Big O complexity: O(n^d), where n = number of moves in a game and d = depth of the expectimax search.

Results And Conclusion Expectimax proved to have the highest success rate, but had an incredibly slow runtime. Alpha-Beta pruning was less successful than expectimax, but had an incredibly quick runtime. The minimax algorithm was least effective in success rate, but quicker than expectimax.

Future Work
- Improve the speed: a faster algorithm would allow a larger search depth and thus better accuracy.
- Tune the heuristics: experiment with the way scores are calculated, the weights, and the board characteristics that are taken into account.

Questions
Q: What limits a minimax algorithm?
A: The depth of the search.
Q: What are the differences between the minimax, alpha-beta pruning, and expectimax algorithms?
A: Alpha-beta pruning expands on minimax by eliminating branches from evaluation, which should decrease runtime. Expectimax adds chance nodes to minimax, using probabilities to increase the chance of reaching a winning score.
Q: What is the time complexity of the expectimax algorithm?
A: O(n^d).
Q: As the tree depth increases, how does it affect your algorithm?
A: It increases the likelihood of a winning outcome, but also increases the time it takes to complete.