1 Game Search & Constrained Search CMSC 25000 Artificial Intelligence January 18, 2005

2 Roadmap I Searching for the right move –Review adversarial search, minimax –Alpha-beta pruning, optimality –State-of-the-art: Specialized techniques –Dealing with chance

3 Games as Search Nodes = Board positions Each ply (one level of depth) = One player's move Special feature: –Two players, adversarial Static evaluation function –Instantaneous assessment of a board configuration –NOT perfect (maybe not even very good)

4 Minimax Lookahead Modeling adversarial players: –Maximizer = positive values –Minimizer = negative values Decisions depend on choices of other player Look forward to some limit –Static evaluate at limit –Propagate up via minimax

5 Minimax Procedure If at the limit of search, compute the static value (relative to the player) If at a minimizing level, run minimax on each child –Report the minimum If at a maximizing level, run minimax on each child –Report the maximum
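A minimal Python sketch of this procedure. The game interface (children, static_value) is hypothetical and only illustrates the shape of the recursion; it is not code from the slides.

```python
# Depth-limited minimax. `game` is a hypothetical object exposing
# children(state) and static_value(state); both names are illustrative.
def minimax(game, state, depth, maximizing):
    children = list(game.children(state))
    # At the search limit (or a terminal position), return the static evaluation.
    if depth == 0 or not children:
        return game.static_value(state)
    if maximizing:
        # Maximizing level: report the maximum of the children's minimax values.
        return max(minimax(game, c, depth - 1, False) for c in children)
    # Minimizing level: report the minimum of the children's minimax values.
    return min(minimax(game, c, depth - 1, True) for c in children)
```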

6 Minimax Search

7 Minimax Analysis Complete: –Yes, if the tree is finite Optimal: –Yes, against an optimal opponent Time: –b^m Space: –bm (progressive-deepening DFS) Practically: Chess: b~35, m~100 –Complete search is infeasible: ~35^100 ≈ 2.5*10^154 nodes

8 Minimax Example (diagram: MAX over MIN; the MIN nodes back up 2 from leaves 2, 7 and 1 from leaves 1, 8, so the MAX root takes value 2)

9 Alpha-Beta Pruning Alpha-beta principle: If you know it’s bad, don’t waste time finding out HOW bad May eliminate some static evaluations May eliminate some node expansions

10 Simple Alpha-Beta Example (diagram: the left MIN node evaluates leaves 2 and 7 to 2, so the MAX root is >= 2; the right MIN node's first leaf is 1, so that node is <= 1 and its remaining children are pruned; the root value is 2)

11 Alpha-Beta Pruning (worked example: slides 11-15 are diagrams only)

16 Alpha-Beta Procedure
If level = TOP_LEVEL: alpha = NEGMAX; beta = POSMAX
If the search limit is reached: compute & return the static value of the current node
If level is a minimizing level:
–While more children to explore AND alpha < beta: ab = alpha-beta(child); if ab < beta, then beta = ab
–Report beta
If level is a maximizing level:
–While more children to explore AND alpha < beta: ab = alpha-beta(child); if ab > alpha, then alpha = ab
–Report alpha
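A runnable Python version of this procedure, under the same hypothetical game interface as the minimax sketch above; NEGMAX and POSMAX become -infinity and +infinity.

```python
import math

# Depth-limited alpha-beta. Follows the slide's procedure: stop exploring a
# node's children as soon as alpha >= beta, and report alpha (max levels) or
# beta (min levels). The `game` interface is illustrative, not from the slides.
def alpha_beta(game, state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    children = list(game.children(state))
    if depth == 0 or not children:
        return game.static_value(state)
    if maximizing:
        for c in children:
            alpha = max(alpha, alpha_beta(game, c, depth - 1, alpha, beta, False))
            if alpha >= beta:   # cutoff: this branch cannot change the final value
                break
        return alpha
    for c in children:
        beta = min(beta, alpha_beta(game, c, depth - 1, alpha, beta, True))
        if alpha >= beta:       # cutoff
            break
    return beta
```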

17 Alpha-Beta Pruning Analysis Worst case: –Bad ordering: alpha-beta prunes NO nodes Best case: –Assume a cooperative oracle orders nodes so the best value is on the left "If an opponent has some response that makes a move bad no matter what the moving player does, then the move is bad." –Implies: check one move where the opposing player has a choice, and check all of one's own moves

18 Optimal Alpha-Beta Ordering (diagram: a game tree with nodes numbered in the optimal evaluation order)

19 Optimal Ordering Alpha-Beta Significant reduction of work: –11 of 27 static evaluations Lower bound on # of static evaluations: –if d is even, s = 2*b^(d/2) - 1 –if d is odd, s = b^((d+1)/2) + b^((d-1)/2) - 1 Upper bound on # of static evaluations: –b^d Reality: somewhere between the two –Typically closer to best than worst
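A quick numeric check of these bounds; the branching factor and depth below are illustrative, chess-like values rather than anything from the slides.

```python
def best_case_static_evals(b, d):
    # Lower bound on leaf evaluations for alpha-beta with perfect move ordering.
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

b, d = 35, 6                            # chess-like branching factor, 6-ply search
print(best_case_static_evals(b, d))     # 85,749 (= 2*35^3 - 1)
print(b ** d)                           # worst case: 35^6 ≈ 1.8 billion
```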

20 Heuristic Game Search Handling time pressure –Focus search –Be reasonably sure “best” option found is likely to be a good option. Progressive deepening –Always having a good move ready Singular extensions –Follow out stand-out moves

21 Progressive Deepening Problem: Timed turns –Limited depth If too conservative, too shallow If too generous, won’t finish Solution: –Always have a (reasonably good) move ready –Search at progressively greater depths: 1,2,3,4,5…..

22 Progressive Deepening Question: Aren't we wasting a lot of work? –E.g. the cost of the intermediate depths Answer: (surprisingly) No! –Assume the cost of static evaluations dominates –Last ply (depth d): cost = b^d –Preceding plies: b^0 + b^1 + … + b^(d-1) = (b^d - 1)/(b - 1) –Ratio of last-ply cost to all preceding plies ~ b - 1 –For large branching factors, the prior work is small relative to the final ply (see the check below)
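A small check of that ratio with an illustrative, chess-like branching factor:

```python
b, d = 35, 6
last_ply = b ** d                          # cost of the deepest ply
preceding = sum(b ** i for i in range(d))  # b^0 + b^1 + ... + b^(d-1)
print(last_ply / preceding)                # ~34.0, i.e. roughly b - 1
```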

23 Singular Extensions Problem: Explore to some depth, but things change a lot in next ply –False sense of security –aka “horizon effect” Solution: “Singular extensions” –If static value stands out, follow it out –Typically, “forced” moves: E.g. follow out captures

24 Additional Pruning Heuristics Tapered search: –Keep more branches for higher-ranked children –Rank nodes cheaply –Rule out moves that look bad Problem: –The heuristic may be misleading: could miss good moves

25 Deterministic Games (due to Russell and Norvig)

26 Games with Chance Many games mix chance and strategy –E.g. Backgammon –Combine dice rolls + opponent moves Modeling chance in game tree –For each ply, add another ply of “chance nodes” –Represent alternative rolls of dice One branch per roll Associate probability of roll with branch

27 Expectiminimax: Minimax + Chance Adding chance to minimax –For each roll, compute max/min as before Computing values at chance nodes –Calculate the EXPECTED value –Sum over branches, weighting each by the probability of its roll
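A sketch of the recursion for a backgammon-like game in which every move ply is followed by a ply of chance nodes. The game interface (is_terminal, static_value, children, chance_outcomes) is hypothetical, as in the earlier minimax sketch; chance_outcomes is assumed to yield (probability, child) pairs, one per dice roll.

```python
def expectiminimax(game, state, depth, kind):
    # kind labels the current node: "max", "min", "chance_min" (MIN moves next),
    # or "chance_max" (MAX moves next). These labels are illustrative.
    if depth == 0 or game.is_terminal(state):
        return game.static_value(state)
    if kind == "max":
        return max(expectiminimax(game, c, depth - 1, "chance_min")
                   for c in game.children(state))
    if kind == "min":
        return min(expectiminimax(game, c, depth - 1, "chance_max")
                   for c in game.children(state))
    # Chance node: expected value = sum over rolls of P(roll) * value(child).
    next_kind = "min" if kind == "chance_min" else "max"
    return sum(p * expectiminimax(game, c, depth - 1, next_kind)
               for p, c in game.chance_outcomes(state))
```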

28 Expecti… Tree

29 Summary Game search: –Key features: alternating, adversarial moves Minimax search: models an adversarial game Alpha-beta pruning: –If a branch is bad, we don't need to see HOW bad! –Exclude a branch once we know it can't change the value –Can significantly reduce the number of evaluations Heuristics: search under pressure –Progressive deepening; singular extensions

30 Constraint Propagation Artificial Intelligence CMSC 25000 January 18, 2005

31 Agenda Constraint Propagation: Motivation Constraint Propagation Example –Waltz line labeling Constraint Propagation Mechanisms –Arc consistency –CSP as search Forward-checking Back-jumping Summary

32 Leveraging Representation General search problems encode task-specific knowledge –Successor states, goal tests, state structure –A "black box" with respect to the search algorithm Constraint satisfaction fixes the representation –Variables, values, constraints –Allows more efficient, structure-specific search

33 Constraint Satisfaction Problems Very general: Model wide range of tasks Key components: –Variables: Take on a value –Domains: Values that can be assigned to vars –Constraints: Restrictions on assignments Constraints are HARD –Not preferences: Must be observed E.g. Can’t schedule two classes: same room, same time

34 Constraint Satisfaction Problem Graph/Map Coloring: Label a graph such that no two adjacent vertices have the same color –Variables: Vertices –Domain: Colors –Constraints: If E(a,b), then C(a) != C(b)
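A minimal encoding of this CSP in Python; the graph and the set of colors are illustrative, not from the slides.

```python
# Map-coloring CSP: variables are vertices, domains are colors,
# constraint: adjacent vertices must receive different colors.
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]   # illustrative graph

def consistent(assignment):
    # A (possibly partial) assignment is consistent if no edge joins
    # two vertices that have been assigned the same color.
    return all(assignment.get(a) is None
               or assignment.get(b) is None
               or assignment[a] != assignment[b]
               for a, b in edges)
```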

35 Constraint Satisfaction Problems Resource Allocation: –Scheduling: N classes over M terms, 4 classes/term –Aircraft at airport gates Satisfiability: e.g. 3-SAT –Assignments of truth values to variables such that the assignment is 1) consistent and 2) makes all clauses true

36 Constraint Satisfaction Problem “N-Queens”: –Place N queens on an NxN chessboard such that none attacks another –Variables: Queens (1/column) –Domain: Rows –Constraints: Not same row, column, or diagonal

37 N-Queens (diagram: a 4x4 board with queens Q1-Q4, one per column)
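The N-Queens constraints written as a Python check, with one variable (a row) per column so the "not same column" constraint holds by construction; this is a sketch, not code from the slides.

```python
def queens_consistent(rows):
    # rows[i] = row of the queen in column i.
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            same_row = rows[i] == rows[j]
            same_diagonal = abs(rows[i] - rows[j]) == (j - i)
            if same_row or same_diagonal:
                return False
    return True

print(queens_consistent([1, 3, 0, 2]))   # True: a valid 4-queens placement
```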

38 Constraint Satisfaction Problem Range of tasks: –Coloring, Resource Allocation, Satisfiability –Varying complexity: e.g. 3-SAT is NP-complete Complexity: a property of the problem, NOT of the CSP framing Basic structure: –Variables: graph nodes, classes, Boolean vars –Domains: colors, time slots, truth values –Constraints: no two adjacent nodes with the same color; no two classes at the same time; a consistent, satisfying assignment

39 Problem Characteristics Values: –Finite? Infinite? Real? –Discrete vs. continuous Constraints –Unary? Binary? N-ary? –Note: all higher-order constraints can be reduced to binary

40 Representational Advantages Simple goal test: –Complete, consistent assignment Complete: all variables have a value Consistent: no constraints violated Maximum depth? –Number of variables Search order? –Commutative, reduces branching –Strategy: assign a value to one variable at a time (see the sketch below)
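A minimal backtracking search built on these ideas: assign one variable per level and check consistency of the partial assignment as you go. consistent() is assumed to test the problem's constraints on a partial assignment, as in the map-coloring sketch above; all names are illustrative.

```python
def backtrack(assignment, variables, domains, consistent):
    # Goal test: a complete, consistent assignment.
    if len(assignment) == len(variables):
        return assignment
    # Pick the next unassigned variable (fixed order: assignment is commutative,
    # so exploring one variable per level does not lose any solutions).
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment) and backtrack(assignment, variables, domains, consistent):
            return assignment
        del assignment[var]   # undo and try the next value
    return None               # no value works: backtrack

# Usage with the map-coloring CSP sketched earlier:
# solution = backtrack({}, variables, domains, consistent)
```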

