CS 188: Artificial Intelligence Fall 2009


CS 188: Artificial Intelligence Fall 2009 Lecture 6: Adversarial Search 9/15/2009 Dan Klein – UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore

Announcements Written 1 has been up (Search and CSPs) Project 2 will be up soon (Multi-Agent Pacman) Other announcements: None yet

Today Finish up Search and CSPs Start on Adversarial Search

Tree-Structured CSPs Theorem: if the constraint graph has no loops, the CSP can be solved in O(n d^2) time Compare to general CSPs, where worst-case time is O(d^n) This property also applies to probabilistic reasoning (later): an important example of the relation between syntactic restrictions and the complexity of reasoning.

Tree-Structured CSPs Choose a variable as root, order variables from root to leaves such that every node’s parent precedes it in the ordering For i = n down to 2, apply RemoveInconsistent(Parent(X_i), X_i) For i = 1 to n, assign X_i consistently with Parent(X_i) Runtime: O(n d^2) (why?)
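A minimal Python sketch of this two-pass algorithm, assuming an illustrative representation (a root-to-leaf ordering, a parent map, per-variable domains, and a binary consistent() check), not the course's actual code:

```python
def solve_tree_csp(order, parent, domains, consistent):
    """order: variables root-to-leaf; parent: child -> parent variable;
    domains: variable -> set of values; consistent(pv, cv): binary check."""
    # Backward pass (i = n down to 2): make each parent arc-consistent
    # with its child by removing unsupported parent values.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {pv for pv in domains[p]
                      if any(consistent(pv, cv) for cv in domains[child])}
        if not domains[p]:
            return None  # a domain emptied: no solution exists
    # Forward pass (i = 1 to n): assign each variable consistently with
    # its already-assigned parent; the backward pass guarantees a value.
    assignment = {order[0]: next(iter(domains[order[0]]))}
    for child in order[1:]:
        pv = assignment[parent[child]]
        assignment[child] = next(cv for cv in domains[child]
                                 if consistent(pv, cv))
    return assignment
```

Each edge is processed once with at most d × d consistency checks, which is where the O(n d^2) bound comes from.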

Tree-Structured CSPs Why does this work? Claim: After each node is processed leftward, all nodes to the right can be assigned in any way consistent with their parent. Proof: Induction on position Why doesn’t this algorithm work with loops? Note: we’ll see this basic idea again with Bayes’ nets

Nearly Tree-Structured CSPs Conditioning: instantiate a variable, prune its neighbors' domains Cutset conditioning: instantiate (in all ways) a set of variables such that the remaining constraint graph is a tree Cutset size c gives runtime O(d^c (n-c) d^2), very fast for small c
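A rough sketch of cutset conditioning, reusing solve_tree_csp from the sketch above; prune() is a hypothetical helper that drops values conflicting with the instantiated cutset variables:

```python
from itertools import product
import copy

def cutset_conditioning(cutset, domains, prune, solve_residual_tree):
    # Try each of the O(d^c) joint assignments to the cutset variables.
    cut_vars = list(cutset)
    for values in product(*(domains[v] for v in cut_vars)):
        cut_assignment = dict(zip(cut_vars, values))
        # Prune neighboring domains against the instantiated cutset,
        # leaving a tree-structured residual CSP.
        residual = prune(copy.deepcopy(domains), cut_assignment)
        solution = solve_residual_tree(residual)  # O(n d^2) per attempt
        if solution is not None:
            return {**cut_assignment, **solution}
    return None
```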

Tree Decompositions* Create a tree-structured graph of overlapping subproblems, each is a mega-variable Solve each subproblem to enforce local constraints Solve the CSP over subproblem mega-variables using our efficient tree-structured CSP algorithm (Diagram: the Australia map-coloring CSP split into overlapping mega-variables M1–M4; a mega-variable’s domain is its set of locally consistent tuples, e.g. M1 ∈ {(WA=r,SA=g,NT=b), (WA=b,SA=r,NT=g), …}, and adjacent mega-variables carry an “agree on shared vars” constraint, e.g. Agree(M1,M2) contains pairs like ((WA=g,SA=g,NT=g), (NT=g,SA=g,Q=g)) that match on SA and NT.)

Iterative Algorithms for CSPs Local search methods: typically work with “complete” states, i.e., all variables assigned To apply to CSPs: Start with some assignment with unsatisfied constraints Operators reassign variable values No fringe! Live on the edge. Variable selection: randomly select any conflicted variable Value selection by min-conflicts heuristic: Choose value that violates the fewest constraints I.e., hill climb with h(n) = total number of violated constraints
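For concreteness, a minimal min-conflicts sketch; variables, domains, and the conflicts() counter are assumed, problem-specific inputs:

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=100_000):
    # Start from a complete (typically inconsistent) random assignment.
    assignment = {v: random.choice(list(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment               # all constraints satisfied
        var = random.choice(conflicted)     # random conflicted variable
        # Value selection: pick the value violating the fewest constraints.
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None  # give up after max_steps
```

For 4-queens (next slide), conflicts() would count the attacking pairs involving the given queen.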

Example: 4-Queens States: 4 queens in 4 columns (4^4 = 256 states) Operators: move queen in column Goal test: no attacks Evaluation: c(n) = number of attacks [DEMO]

Performance of Min-Conflicts Given random initial state, can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000) The same appears to be true for any randomly-generated CSP except in a narrow range of the ratio R = (number of constraints) / (number of variables)

Hill Climbing Simple, general idea: Start wherever Always choose the best neighbor If no neighbors have better scores than current, quit Why can this be a terrible idea? Complete? Optimal? What’s good about it?
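A bare-bones hill-climbing loop matching the recipe above; neighbors() and score() are assumed, problem-specific functions:

```python
def hill_climb(start, neighbors, score):
    current = start
    while True:
        # Always move to the best-scoring neighbor...
        best = max(neighbors(current), key=score, default=current)
        # ...and quit when no neighbor beats the current state.
        if score(best) <= score(current):
            return current  # possibly only a local maximum!
        current = best
```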

Hill Climbing Diagram Random restarts? Random sideways steps?

Simulated Annealing Idea: Escape local maxima by allowing downhill moves But make them rarer as time goes on
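A small sketch of that idea with a geometric cooling schedule; neighbor(), score(), and the schedule parameters are illustrative choices:

```python
import math
import random

def simulated_annealing(start, neighbor, score,
                        t0=1.0, cooling=0.999, t_min=1e-4):
    current, t = start, t0
    while t > t_min:
        nxt = neighbor(current)
        delta = score(nxt) - score(current)
        # Uphill moves are always taken; downhill moves are taken with
        # probability e^(delta/t), which shrinks as t cools toward 0.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # make downhill moves rarer as time goes on
    return current
```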

Summary CSPs are a special kind of search problem: States defined by values of a fixed set of variables Goal test defined by constraints on variable values Backtracking = depth-first search with incremental constraint checks Ordering: variable and value choice heuristics help significantly Filtering: forward checking, arc consistency prevent assignments that guarantee later failure Structure: Disconnected and tree-structured CSPs are efficient Iterative improvement: min-conflicts is usually effective in practice

Game Playing State-of-the-Art Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. Used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved! Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second, used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better, if less historic. Othello: Human champions refuse to compete against computers, which are too good. Go: Human champions are beginning to be challenged by machines, though the best humans still beat the best machines. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning. Pacman: unknown

GamesCrafters http://gamescrafters.berkeley.edu/

Adversarial Search [DEMO: mystery pacman]

Game Playing Many different kinds of games! Axes: Deterministic or stochastic? One, two, or more players? Perfect information (can you see the state)? Want algorithms for calculating a strategy (policy) which recommends a move in each state

Deterministic Games Many possible formalizations, one is: States: S (start at s0) Players: P = {1...N} (usually take turns) Actions: A (may depend on player / state) Transition Function: S × A → S Terminal Test: S → {t, f} Terminal Utilities: S × P → R Solution for a player is a policy: S → A
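One possible rendering of this formalization as a Python interface (method names are illustrative, not taken from the course code):

```python
class DeterministicGame:
    """S, P, A, transition, terminal test, and terminal utilities."""
    def initial_state(self):           # s0
        raise NotImplementedError
    def player(self, state):           # whose turn, an element of {1..N}
        raise NotImplementedError
    def actions(self, state):          # A, may depend on player / state
        raise NotImplementedError
    def result(self, state, action):   # transition function S x A -> S
        raise NotImplementedError
    def is_terminal(self, state):      # terminal test S -> {t, f}
        raise NotImplementedError
    def utility(self, state, player):  # terminal utilities S x P -> R
        raise NotImplementedError
```

A policy S → A then falls out of search: for each state, return the action whose resulting subtree has the best value for the player to move.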

Deterministic Single-Player? Deterministic, single player, perfect information: Know the rules Know what actions do Know when you win E.g. Freecell, 8-Puzzle, Rubik’s cube … it’s just search! Slight reinterpretation: Each node stores a value: the best outcome it can reach This is the maximal outcome of its children (the max value) Note that we don’t have path sums as before (utilities at end) After search, can pick move that leads to best node

Deterministic Two-Player E.g. tic-tac-toe, chess, checkers Zero-sum games One player maximizes result The other minimizes result Minimax search A state-space search tree Players alternate Each layer, or ply, consists of a round of moves (*slightly different from the book definition) Choose move to position with highest minimax value = best achievable utility against best play

Tic-tac-toe Game Tree

Minimax Example

Minimax Search
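The pseudocode on this slide is an image that does not survive in the transcript; below is a standard recursive minimax sketch over the illustrative interface above, assuming a two-player zero-sum game with MAX as player 1:

```python
def minimax_value(game, state):
    if game.is_terminal(state):
        return game.utility(state, 1)  # value from MAX's point of view
    values = [minimax_value(game, game.result(state, a))
              for a in game.actions(state)]
    # MAX layers take the largest child value, MIN layers the smallest.
    return max(values) if game.player(state) == 1 else min(values)

def minimax_decision(game, state):
    # Choose the move leading to the highest minimax value.
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a)))
```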

Minimax Properties Optimal against a perfect player. Otherwise? Time complexity? O(b^m) Space complexity? O(bm) For chess, b ≈ 35, m ≈ 100 Exact solution is completely infeasible But, do we need to explore the whole tree? [DEMO: minVsExp]

Resource Limits Cannot search to leaves Depth-limited search Instead, search a limited depth of the tree Replace terminal utilities with an eval function for non-terminal positions Guarantee of optimal play is gone More plies makes a BIG difference [DEMO: limitedDepth] Example: Suppose we have 100 seconds, can explore 10K nodes / sec So can check 1M nodes per move – reaches about depth 8 – a decent chess program
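A depth-limited variant of the minimax sketch above: below the depth limit, a heuristic evaluate() stands in for the true terminal utility:

```python
def depth_limited_value(game, state, depth, evaluate):
    if game.is_terminal(state):
        return game.utility(state, 1)
    if depth == 0:
        return evaluate(state)  # eval function replaces terminal utility
    values = [depth_limited_value(game, game.result(state, a),
                                  depth - 1, evaluate)
              for a in game.actions(state)]
    return max(values) if game.player(state) == 1 else min(values)
```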

Evaluation Functions Function which scores non-terminals Ideal function: returns the utility of the position In practice: typically a weighted linear sum of features: Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s), e.g. f1(s) = (num white queens – num black queens), etc.
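For instance, a weighted linear sum might look like the following; the feature functions and weights are made-up placeholders:

```python
def evaluate(state, features, weights):
    # Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)
    return sum(w * f(state) for w, f in zip(weights, features))

# e.g. features = [lambda s: s.num_white_queens - s.num_black_queens, ...]
```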

Evaluation for Pacman [DEMO: thrashing, smart ghosts]

Why Pacman Starves He knows his score will go up by eating the dot now He knows his score will go up just as much by eating the dot later on There are no point-scoring opportunities after eating the dot Therefore, waiting seems just as good as eating

Iterative Deepening Iterative deepening uses DFS as a subroutine: Do a DFS which only searches for paths of length 1 or less. (DFS gives up on any path of length 2) If “1” failed, do a DFS which only searches paths of length 2 or less. If “2” failed, do a DFS which only searches paths of length 3 or less. …and so on. Why do we want to do this for multiplayer games?
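One answer: an anytime move chooser. Here is a sketch built on the depth-limited search above; note the time check only fires between depths, so a real implementation would need finer-grained interruption:

```python
import time

def iterative_deepening_decision(game, state, evaluate, time_limit=1.0):
    deadline = time.monotonic() + time_limit
    best_action, depth = None, 1
    while time.monotonic() < deadline:
        # Complete a full depth-d search before trying depth d+1, so a
        # finished answer is always on hand when the clock runs out.
        best_action = max(game.actions(state),
                          key=lambda a: depth_limited_value(
                              game, game.result(state, a),
                              depth - 1, evaluate))
        depth += 1
    return best_action
```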

Pruning in Minimax Search (Diagram: the minimax example tree with each node annotated by an interval of possible values, e.g. the root tightening from [-∞, +∞] to [3, 14] to [3, 5] and finally [3, 3] as the leaves 3, 12, 8, 2, 14, 5, 2 are examined, showing which branches can be pruned.)

α-β Pruning Example

α-β Pruning General configuration α is the best value that MAX can get at any choice point along the current path If n becomes worse than α, MAX will avoid it, so can stop considering n’s other children Define β similarly for MIN (Diagram: alternating Player / Opponent layers above node n.)

α-β Pruning Pseudocode
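Since the pseudocode here is an image in the original slides, this is a standard alpha-beta sketch consistent with the description above (same illustrative interface, MAX as player 1):

```python
def alpha_beta_value(game, state, alpha=float('-inf'), beta=float('inf')):
    if game.is_terminal(state):
        return game.utility(state, 1)
    if game.player(state) == 1:                       # MAX node
        v = float('-inf')
        for a in game.actions(state):
            v = max(v, alpha_beta_value(game, game.result(state, a),
                                        alpha, beta))
            if v >= beta:
                return v       # MIN above would never let us get here
            alpha = max(alpha, v)
        return v
    else:                                             # MIN node
        v = float('inf')
        for a in game.actions(state):
            v = min(v, alpha_beta_value(game, game.result(state, a),
                                        alpha, beta))
            if v <= alpha:
                return v       # MAX above would never let us get here
            beta = min(beta, v)
        return v
```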

α-β Pruning Properties Pruning has no effect on final result Good move ordering improves effectiveness of pruning With “perfect ordering”: Time complexity drops to O(b^(m/2)) Doubles solvable depth Full search of, e.g. chess, is still hopeless! A simple example of metareasoning, here reasoning about which computations are relevant

Non-Zero-Sum Games Similar to minimax: Utilities are now tuples Each player maximizes their own entry at each node Propagate (or back up) nodes from children (Diagram: a three-player tree with utility triples such as (1,2,6), (4,3,2), (6,1,2), (7,4,1) at the leaves.)

Stochastic Single-Player What if we don’t know what the result of an action will be? E.g., In solitaire, shuffle is unknown In minesweeper, mine locations In pacman, ghosts! Can do expectimax search Chance nodes, like actions except the environment controls the action chosen Calculate utility for each node Max nodes as in search Chance nodes take average (expectation) of value of children Later, we’ll learn how to formalize this as a Markov Decision Process [DEMO: minVsExp]
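A minimal expectimax sketch: max nodes as before, chance nodes take a probability-weighted average. The outcomes() generator of (probability, next_state) pairs is a hypothetical addition to the earlier interface:

```python
def expectimax_value(game, state):
    if game.is_terminal(state):
        return game.utility(state, 1)
    if game.player(state) == 1:  # max node: the agent picks the best move
        return max(expectimax_value(game, game.result(state, a))
                   for a in game.actions(state))
    # Chance node: the environment "moves"; average over its outcomes.
    return sum(p * expectimax_value(game, s)
               for p, s in game.outcomes(state))
```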

Stochastic Two-Player E.g. backgammon Expectiminimax (!) Environment is an extra player that moves after each agent Chance nodes take expectations, otherwise like minimax

Stochastic Two-Player Dice rolls increase b: 21 possible rolls with 2 dice Backgammon ≈ 20 legal moves Depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9 As depth increases, probability of reaching a given node shrinks So value of lookahead is diminished So limiting depth is less damaging But pruning is less possible… TDGammon uses depth-2 search + very good eval function + reinforcement learning: world-champion level play

What’s Next? Make sure you know what: Probabilities are Expectations are Next topics: Dealing with uncertainty How to learn evaluation functions Markov Decision Processes

Local Search Methods Tree search keeps unexplored alternatives on the fringe (ensures completeness) Local search: improve what you have until you can’t make it better Generally much faster and more memory efficient (but incomplete)

Types of Search Problems Planning problems: We want a path to a solution (examples?) Usually want an optimal path Incremental formulations Identification problems: We actually just want to know what the goal is (examples?) Usually want an optimal goal Complete-state formulations Iterative improvement algorithms

Simulated Annealing Theoretical guarantee: Stationary distribution: p(x) ∝ e^(E(x)/kT) If T decreased slowly enough, will converge to optimal state! Is this an interesting guarantee? Sounds like magic, but reality is reality: The more downhill steps you need to escape, the less likely you are to ever make them all in a row People think hard about ridge operators which let you jump around the space in better ways

Genetic Algorithms Genetic algorithms use a natural selection metaphor Like beam search (selection), but also have pairwise crossover operators, with optional mutation Probably the most misunderstood, misapplied (and even maligned) technique around!
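A toy illustration of one generation under the metaphor: fitness-proportional selection, single-point crossover, and occasional mutation. The string-of-genes encoding and fitness() are assumptions:

```python
import random

def ga_step(population, fitness, mutation_rate=0.1, gene_pool="12345678"):
    # Selection: fitter individuals are more likely to become parents
    # (assumes non-negative fitness values).
    weights = [fitness(ind) for ind in population]
    next_generation = []
    for _ in range(len(population)):
        mom, dad = random.choices(population, weights=weights, k=2)
        cut = random.randrange(1, len(mom))        # single-point crossover
        child = mom[:cut] + dad[cut:]
        if random.random() < mutation_rate:        # optional mutation
            i = random.randrange(len(child))
            child = child[:i] + random.choice(gene_pool) + child[i + 1:]
        next_generation.append(child)
    return next_generation
```

For 8-queens (next slide), an individual could be a string of column heights and fitness the number of non-attacking pairs.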

Example: N-Queens Why does crossover make sense here? When wouldn’t it make sense? What would mutation be? What would a good fitness function be?