Games with Chance; Other Search Algorithms
CPSC 315 – Programming Studio, Spring 2013
Project 2, Lecture 3
Adapted from slides by Yoonsuck Choe
Game Playing with Chance
- Minimax trees work well when the game is deterministic, but many games have an element of chance.
- Include chance nodes in the tree.
- Try to maximize/minimize the expected value.
- Or, play a pessimistic/optimistic approach.
Tree with Chance Nodes
[Figure: game tree with alternating Max, Chance, Min, and Chance layers. For each die roll (red lines), evaluate each possible move (blue lines).]
Expected Value
- For a random variable x, the expected value is E[x] = Σ_x x · Pr(x), where Pr(x) is the probability of x occurring.
- Example: rolling a pair of dice, E[sum] = Σ_{s=2}^{12} s · Pr(s) = 7.
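As a concrete illustration (not from the original slides), the short Python sketch below computes the expected value of the sum of two fair dice by enumerating all 36 equally likely outcomes.

```python
from collections import Counter

def expected_value(outcomes):
    """E[x] = sum over x of x * Pr(x)."""
    return sum(x * p for x, p in outcomes)

# All 36 equally likely rolls of two fair dice, grouped by their sum.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
outcomes = [(total, count / 36) for total, count in counts.items()]

print(expected_value(outcomes))  # 7.0 (up to floating-point rounding)
```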
Evaluating the Tree
Choosing a maximum (same idea for a minimum):
- Evaluate the same move from ALL chance nodes.
- Find the expected value for that move.
- Choose the move with the largest expected value.
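A minimal sketch of this expected-value (expectiminimax-style) evaluation, assuming layers alternate Max → Chance → Min → Chance as in the tree above. The helpers `legal_moves`, `apply_move`, `apply_roll`, `die_rolls`, `next_player`, and `evaluate` are hypothetical, game-specific functions, not something from the slides.

```python
def expectiminimax(state, depth, layer):
    """Evaluate a tree with "max", "min", and "chance" layers."""
    if depth == 0:
        return evaluate(state)

    if layer == "chance":
        # Expected value over every possible die roll, weighted by probability.
        return sum(prob * expectiminimax(apply_roll(state, roll), depth - 1,
                                         next_player(state))
                   for roll, prob in die_rolls(state))

    # Max or Min layer: each move leads to a chance node.
    values = [expectiminimax(apply_move(state, move), depth - 1, "chance")
              for move in legal_moves(state)]
    return max(values) if layer == "max" else min(values)
```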
More on Chance
- Rather than the expected value, could use another approach:
  - Maximize the worst-case value (avoid catastrophe).
  - Give high weight if a very good position is possible (a "knockout" move).
  - Form a hybrid approach, weighting all of these options (see the sketch below).
- Note: time complexity increases to O(b^m · n^m), where b is the branching factor, n is the number of possible chance outcomes, and m is the depth.
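One way such a hybrid could look, as a hedged sketch only: the weights are made-up illustrations, and it reuses the hypothetical helpers from the expectiminimax sketch above.

```python
def hybrid_value(state, depth, w_expected=0.6, w_worst=0.3, w_best=0.1):
    """Blend expected, worst-case, and best-case values at a chance node.

    Weights are illustrative; a real game would tune them.
    """
    scored = [(prob, expectiminimax(apply_roll(state, roll), depth - 1,
                                    next_player(state)))
              for roll, prob in die_rolls(state)]
    expected = sum(prob * value for prob, value in scored)
    worst = min(value for _prob, value in scored)   # avoid catastrophe
    best = max(value for _prob, value in scored)    # reward "knockout" chances
    return w_expected * expected + w_worst * worst + w_best * best
```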
Accounting for Time
- Often, real-world clock time is a factor in a game:
  - Limit on time per move.
  - Limit on total time allowed.
- We may not be able to predict how far ahead we can look in the tree in a given time (different computers, board states, etc.).
- A non-conservative estimate might violate the time constraint; a conservative estimate is likely to waste time.
Dealing with Time: Iterative Deepening
- Compute to one level.
- Then to two levels (repeating level 1).
  - Order moves based on the best move found previously.
- Then to three, etc., until time runs out.
- Return the best known move so far when the time limit is reached.
- Wastes time repeating earlier levels, but the waste can be surprisingly small, especially with large branching factors.
- Ordering from prior evaluations helps; a sketch follows below.
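A hedged sketch of iterative deepening under a time budget. The `legal_moves` helper and a fixed-depth `search(state, move, depth)` routine (e.g., minimax or the expectiminimax sketch above) are assumptions for illustration.

```python
import time

def iterative_deepening(state, time_limit):
    """Search depth 1, 2, 3, ... until the clock runs out.

    Returns the best move from the deepest fully searched level.
    """
    deadline = time.monotonic() + time_limit
    moves = list(legal_moves(state))
    if not moves:
        return None
    best_move = None
    depth = 1
    while time.monotonic() < deadline:
        scored = []
        for move in moves:
            if time.monotonic() >= deadline:
                break
            scored.append((search(state, move, depth), move))
        else:
            # Level finished in time: keep its best move and reorder so the
            # most promising moves are searched first at the next depth.
            scored.sort(key=lambda pair: pair[0], reverse=True)
            best_move = scored[0][1]
            moves = [move for _score, move in scored]
            depth += 1
            continue
        break  # ran out of time mid-level; discard the partial results
    return best_move
```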
More on Game Playing
- Rigorous approaches to imperfect-information games are still being studied.
  - Assume random moves by the opponent.
  - Assume some model of the opponent, based on the perfect-information case.
- There are indications that modeling the opponent's behavior is often of more value than evaluating the board position.
AI in Larger-Scale and Modern Computer Games
- The idealized situations described so far often do not extend to extremely complex, more continuous games; even just listing the possible moves can be difficult.
- The larger situation can be broken down into subproblems:
  - Hierarchical approach.
  - Use of state diagrams.
- Some subproblems are more easily solved, e.g., path planning.
AI in Larger-Scale and Modern Computer Games (continued)
- Use of simulation as opposed to a deterministic solution:
  - Helps to explore a large range of states.
  - Can create complex behavior wrapped up in autonomous agents.
- Fun vs. competent: the goal of the game is not necessarily for the computer to win.
  - Often a collection of ad-hoc rules.
  - Cheating is allowed.
General State Diagrams
- List the possible states one can reach in the game (nodes).
  - Can be abstracted into general conditions.
- Describe ways of moving from one state to another (edges).
  - Not necessarily a specific move; could be a general approach.
- Together these form a directed (and often cyclic) graph; a small representation is sketched below.
- Our minimax tree is a state diagram, but we hide any cycles.
  - Sometimes we want to avoid repeated states.
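A minimal way to represent such a state diagram is an adjacency list. The state names and edges below are hypothetical, chosen only to illustrate the idea.

```python
# Directed, possibly cyclic state graph as an adjacency list.
# Keys are states; values are the states reachable in one step.
state_graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": ["A"],   # a cycle back to an earlier state
    "E": ["G"],
    "F": ["G"],
    "G": [],      # goal state (no outgoing edges)
}
```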
State Diagram
[Figure: example state diagram with states A through K connected by directed edges.]
Exploring the State Diagram
- Explore for solutions using BFS or DFS.
- Depth-limited search: DFS, but only to a limited depth in the tree.
- Iterative deepening search: as described before, but with DFS on the graph, not just a tree.
- If there is a specific goal state, can use bidirectional search: search forward from the start and backward from the goal, trying to meet in the middle (think of maze puzzles).
- A depth-limited DFS sketch follows below.
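A hedged sketch of depth-limited DFS over the adjacency-list graph from the earlier sketch; the `state_graph`, goal state, and depth limit are illustrative assumptions.

```python
def depth_limited_dfs(graph, state, goal, limit, visited=None):
    """Return a path from state to goal using at most `limit` edges, or None.

    Tracks states on the current path to avoid repeating states (cycles).
    """
    visited = visited if visited is not None else set()
    if state == goal:
        return [state]
    if limit == 0:
        return None
    visited.add(state)
    for neighbor in graph.get(state, []):
        if neighbor in visited:
            continue  # avoid repeated states / cycles
        path = depth_limited_dfs(graph, neighbor, goal, limit - 1, visited)
        if path is not None:
            return [state] + path
    visited.remove(state)  # allow this state on other candidate paths
    return None

print(depth_limited_dfs(state_graph, "A", "G", limit=3))  # ['A', 'B', 'E', 'G']
```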
More Informed Search
- Traversing links and reaching goal states are not always of equal cost or value.
- Can have a heuristic function h(x): an estimate of how close state x is to the goal state.
  - Similar to the board evaluation/utility function in game play.
- Can use this to order other searches, or to create a greedy approach (sketched below).
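A hedged sketch of a greedy best-first search driven by such a heuristic. The heuristic values are made up for illustration, and `state_graph` refers to the adjacency-list sketch above.

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Always expand the frontier state with the smallest heuristic value h."""
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for neighbor in graph.get(state, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None

# Made-up heuristic: a rough guess of how many steps remain to reach "G".
steps_left = {"A": 3, "B": 2, "C": 2, "D": 3, "E": 1, "F": 1, "G": 0}
print(greedy_best_first(state_graph, "A", "G", h=lambda s: steps_left[s]))
```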