Graph Search II
GAM 376
Robin Burke
Outline
  Homework #3
  Graph search review
    DFS, BFS
    A* search
    Iterative beam search
    IA* search
  Search in turn-based games
    Alpha-beta search
Homework #3

// calculate leader's collision box (factor in closing velocity)
// not accurate, but OK since the offset pursuit has us closing in on the leader
// this is a worst-case assumption
double boxLength = Prm.MinDetectionBoxLength +
                   ((leader->Speed() + m_pVehicle->Speed()) / leader->MaxSpeed()) *
                   Prm.MinDetectionBoxLength;

m_pVehicle->World()->TagVehiclesWithinViewRange((BaseGameEntity*)leader, boxLength);

if (m_pVehicle->IsTagged())
{
  // if so, calculate our position in leader space
  // and generate a force in the opposite direction,
  // proportional to how close the leader is
  Vector2D LeaderMyPos = PointToLocalSpace(m_pVehicle->Pos(),
                                           leader->Heading(),
                                           leader->Side(),
                                           leader->Pos());

  if (LeaderMyPos.x > 0 && fabs(LeaderMyPos.y) < boxWidth)  // I'm in front and close
  {
    // the closer the leader is, the stronger the
    // steering force should be
    double multiplier = 5.0 * (boxLength - LeaderMyPos.x) / boxLength;  // some tinkering required

    // calculate the lateral force
    double force = (boxWidth - fabs(LeaderMyPos.y)) * -multiplier;

    return leader->Side() * force;
  }
}

// if not, standard offset pursuit
Uninformed Search
  Depth-first search
    we keep expanding the most recent edge until we run out of edges, then backtrack
    Characteristics
      minimum memory cost
      not guaranteed optimal
      may not terminate if the search space is large
  Iterative-deepening search
    do depth-first search with a limited search depth
    increase the depth limit if it fails
    Characteristics
      optimal
      cannot get stuck
      minimum memory cost
      search work repeated at each level
  Breadth-first search
    we expand all edges at each node before we go on to the next
    Characteristics
      guaranteed optimal
      memory cost can be very high
  Dijkstra's algorithm (see the sketch after this list)
    for graphs with weighted edges
    we expand the node that has the cheapest path
    keep track of the cheapest path to each node
    Characteristics
      guaranteed optimal
      may consider irrelevant paths
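For concreteness, here is a minimal sketch of Dijkstra's algorithm over a weighted adjacency list. The Edge struct, the plain std::vector graph, and the function name are assumptions made for this sketch; they are not the SparseGraph/GraphEdge classes used in Buckland's code.

#include <vector>
#include <queue>
#include <limits>
#include <utility>

struct Edge { int to; double cost; };

// Returns the cheapest path cost from 'source' to every node.
std::vector<double> Dijkstra(const std::vector<std::vector<Edge>>& graph, int source)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), INF);
    dist[source] = 0.0;

    // min-priority queue of (path cost so far, node)
    using Entry = std::pair<double, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    open.push({0.0, source});

    while (!open.empty())
    {
        auto [cost, node] = open.top();
        open.pop();
        if (cost > dist[node]) continue;      // stale entry; a cheaper path was already found

        for (const Edge& e : graph[node])     // relax every outgoing edge
        {
            double newCost = cost + e.cost;
            if (newCost < dist[e.to])
            {
                dist[e.to] = newCost;
                open.push({newCost, e.to});
            }
        }
    }
    return dist;
}

BFS is the special case where every edge cost is 1, which is why Dijkstra reduces to BFS on unweighted graphs.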
Informed search
  A* (a sketch follows below)
    expand the "best" path so far instead of the cheapest
    "best" is defined as the sum of the path cost and an estimate of the distance to the goal: f(n) = g(n) + h(n)
    the estimate h is called the search heuristic
    the heuristic must underestimate the real cost
      otherwise, the search is not guaranteed to return the optimal path
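A minimal A* sketch under the same assumptions as the Dijkstra sketch above (hypothetical Edge/adjacency-list graph, heuristic supplied as a callable); it is a generic illustration, not Buckland's A* search class.

#include <vector>
#include <queue>
#include <functional>
#include <limits>
#include <algorithm>
#include <utility>

struct Edge { int to; double cost; };

// Returns the node sequence from 'source' to 'target', or an empty vector if unreachable.
std::vector<int> AStar(const std::vector<std::vector<Edge>>& graph,
                       int source, int target,
                       const std::function<double(int)>& heuristic)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> g(graph.size(), INF);   // cheapest known path cost so far
    std::vector<int> parent(graph.size(), -1);
    g[source] = 0.0;

    using Entry = std::pair<double, int>;       // (f = g + h, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    open.push({heuristic(source), source});

    while (!open.empty())
    {
        int node = open.top().second;
        open.pop();
        if (node == target) break;              // cheapest path to the goal has been settled

        for (const Edge& e : graph[node])
        {
            double newG = g[node] + e.cost;
            if (newG < g[e.to])
            {
                g[e.to] = newG;
                parent[e.to] = node;
                open.push({newG + heuristic(e.to), e.to});   // order by f = g + h, not by g alone
            }
        }
    }

    if (g[target] == INF) return {};            // unreachable
    std::vector<int> path;
    for (int n = target; n != -1; n = parent[n]) path.push_back(n);
    std::reverse(path.begin(), path.end());
    return path;
}

If heuristic() always returns 0 this degenerates to Dijkstra; if it can overestimate, the early exit at the target may return a suboptimal path, which is the point the slide makes.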
Buckland's implementation
Beam search
  A* shares a problem with BFS: memory cost
    too many nodes not yet expanded
  We can limit the set of nodes considered
    by cost: throw out all paths of cost > C
    by size: limit the priority queue to L entries
      throw out nodes of index > L in the queue
      (not all priority queue implementations can do this efficiently; see the sketch after this list)
  Characteristics
    not guaranteed to be optimal
    not guaranteed to be complete
    limited memory cost
  Iterative widening
    if we don't find a solution, increase C (or L) until we do
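A sketch of the size limit only, under the assumption that the open list is kept in a plain vector of (f, node) entries rather than a std::priority_queue, since (as the slide notes) a binary-heap priority queue cannot be trimmed efficiently.

#include <vector>
#include <algorithm>
#include <utility>
#include <cstddef>

using Entry = std::pair<double, int>;   // (f = g + h, node)

// Trim the open list to at most 'beamWidth' (L) cheapest entries.
void PruneToBeam(std::vector<Entry>& open, std::size_t beamWidth)
{
    if (open.size() <= beamWidth) return;
    // move the beamWidth smallest f-values to the front, then discard the rest
    std::nth_element(open.begin(), open.begin() + beamWidth, open.end());
    open.resize(beamWidth);
}

Iterative widening would wrap the whole search in a loop that retries with a larger beamWidth (or cost bound C) whenever no solution is found.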
Bi-directional search
  Do two searches
    one starting from the beginning
    one starting from the end
  Look for overlap in the middle
  This reduces the search depth
    2 * b^(n/2) instead of b^n
  Characteristics
    can be used with DFS, A*, IDS
    not compatible with every search space
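A minimal bidirectional breadth-first sketch on an unweighted adjacency list (an assumption for illustration). It only reports reachability; reconstructing the joined path would need parent links on both sides.

#include <vector>
#include <unordered_set>
#include <utility>

bool BidirectionalReachable(const std::vector<std::vector<int>>& graph, int start, int goal)
{
    if (start == goal) return true;
    std::unordered_set<int> visitedA{start}, visitedB{goal};
    std::vector<int> frontierA{start}, frontierB{goal};

    while (!frontierA.empty() && !frontierB.empty())
    {
        // always expand the smaller frontier; this is what keeps the work near 2 * b^(n/2)
        if (frontierA.size() > frontierB.size())
        {
            std::swap(frontierA, frontierB);
            std::swap(visitedA, visitedB);
        }

        std::vector<int> next;
        for (int node : frontierA)
            for (int nbr : graph[node])
            {
                if (visitedB.count(nbr)) return true;            // the two searches met in the middle
                if (visitedA.insert(nbr).second) next.push_back(nbr);
            }
        frontierA = std::move(next);
    }
    return false;   // the frontiers never met
}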
Turn-based games
  Use a graph to represent possible courses of action in a turn-based game
  Basic idea
    nodes represent the "state" of the game
      a set of cards
      a board position
    edges are moves
      changes in game state
    winning means reaching a particular state
      defined by the rules
  A winning strategy is a path through the edges / moves that leads to a winning state
  (a sketch of one possible state representation follows below)
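One possible concrete state representation, using tic-tac-toe purely as a stand-in (the lecture does not fix a particular game); the struct and method names are assumptions. The later minimax sketches reuse this type.

#include <array>
#include <vector>

struct GameState
{
    std::array<char, 9> board;     // ' ' = empty, 'X' or 'O'
    char toMove = 'X';

    GameState() { board.fill(' '); }

    // edges of the game graph: every legal move from this state
    std::vector<int> LegalMoves() const
    {
        std::vector<int> moves;
        for (int i = 0; i < 9; ++i)
            if (board[i] == ' ') moves.push_back(i);
        return moves;
    }

    // applying a move produces the successor state (a new node in the graph)
    GameState Apply(int square) const
    {
        GameState next = *this;
        next.board[square] = toMove;
        next.toMove = (toMove == 'X') ? 'O' : 'X';
        return next;
    }
};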
(Practically) Infinite Search
  What if the goal state is so far away that search won't find it?
    chess has roughly 10^43 states
    cannot be searched completely
  Pick a search depth
    estimate the "value" of the position at that depth
    treat that as the "result" of the search
  Search then becomes finding the best board position after k moves
    easy enough to store the best node so far and the path (move) to it
  (a sketch of such a position evaluator follows below)
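A hypothetical static evaluator for the GameState sketched earlier: this is the "value of the position" returned when the search is cut off at depth k rather than at a true win or loss. The +1/-1/0 scoring is an assumption for illustration.

// Score the position from the point of view of 'me': +1 won, -1 lost, 0 undecided.
double Evaluate(const GameState& s, char me)
{
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}};
    for (const auto& line : lines)
    {
        char a = s.board[line[0]];
        if (a != ' ' && a == s.board[line[1]] && a == s.board[line[2]])
            return (a == me) ? 1.0 : -1.0;   // a completed line decides the game
    }
    return 0.0;   // nothing decided yet: neutral estimate
}

A real chess evaluator would return a graded estimate (material, mobility, king safety, and so on) rather than only three values.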
What about the opponent?
  Obviously, our opponent will not pick moves that keep us on the path to our winning game
  What move should we predict?
    worst-case scenario: the opponent will do what's best for him
  To win, we need a strategy that will succeed even if the opponent plays his best
Mini-max assumption
  Assume that the opponent values the game state the opposite from you
    V_me(state) = -V_opp(state)
  At alternate nodes
    choose the state with maximum f for me
    or, choose the state with minimum f for the opponent
Mini-max algorithm
  Build a tree with two types of nodes
    max nodes: my move
    min nodes: opponent's move
  Perform depth-first search, with iterative deepening
  Evaluate the board position at each node
    on a max node, use the max of all children as the value of the parent
    on a min node, use the min of all children as the value of the parent
  When the search is complete
    the move that leads to the max child of the current node is the one to take
  Anytime
    this is an "anytime" algorithm
    you can stop the search at any time and you have a best estimate of your move (to some depth)
  (a minimax sketch follows below)
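A minimal depth-limited minimax sketch, reusing the hypothetical GameState and Evaluate() from the earlier sketches; the names and cutoff values are assumptions, not the lecture's reference code.

#include <algorithm>
#include <vector>

double Minimax(const GameState& s, int depth, bool maximizing, char me)
{
    std::vector<int> moves = s.LegalMoves();
    double staticValue = Evaluate(s, me);

    // cut off at the depth limit, at a decided position, or when no moves remain
    if (depth == 0 || staticValue != 0.0 || moves.empty())
        return staticValue;

    double best = maximizing ? -1e9 : 1e9;
    for (int m : moves)
    {
        double v = Minimax(s.Apply(m), depth - 1, !maximizing, me);
        best = maximizing ? std::max(best, v) : std::min(best, v);   // max node vs. min node
    }
    return best;
}

At the root you would call Minimax on each legal move and play the move whose child scores highest; rerunning with depth 1, 2, 3, ... gives the iterative-deepening "anytime" behaviour described above.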
Problem
  I may waste time searching nodes that I would never use
  A* doesn't help
    a position may be bad after one move but better after three
    e.g. a sacrifice
Alpha-beta pruning
  Reduces the size of the search space without changing the answer
  Simple idea
    don't consider any moves that are worse than ones you already know about
  (a sketch follows below)
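The same search with alpha-beta cutoffs, again a sketch built on the hypothetical GameState/Evaluate code above. alpha is the best value the maximizer can already guarantee, beta the best the minimizer can; any branch that cannot beat them is skipped without changing the final answer.

#include <algorithm>
#include <vector>

double AlphaBeta(const GameState& s, int depth, double alpha, double beta,
                 bool maximizing, char me)
{
    std::vector<int> moves = s.LegalMoves();
    double staticValue = Evaluate(s, me);
    if (depth == 0 || staticValue != 0.0 || moves.empty())
        return staticValue;

    if (maximizing)
    {
        double best = -1e9;
        for (int m : moves)
        {
            best = std::max(best, AlphaBeta(s.Apply(m), depth - 1, alpha, beta, false, me));
            alpha = std::max(alpha, best);
            if (beta <= alpha) break;   // opponent will never allow this line: prune the rest
        }
        return best;
    }
    else
    {
        double best = 1e9;
        for (int m : moves)
        {
            best = std::min(best, AlphaBeta(s.Apply(m), depth - 1, alpha, beta, true, me));
            beta = std::min(beta, best);
            if (beta <= alpha) break;   // we would never choose this line: prune the rest
        }
        return best;
    }
}

The initial call uses an infinitely wide window, e.g. AlphaBeta(state, depth, -1e9, 1e9, true, 'X').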
Animated example
  http://sern.ucalgary.ca/courses/CPSC/533/W99/presentations/L2_5B_Lima_Neitz/abpruning.html
What about chance?
  In a game of chance there is a random element in the game process
  Backgammon
    the player can only make moves that use the outcome of the dice roll
  How do I know what my opponent will do?
    I don't
    but I can have an expectation
Expectiminimax
  The idea
    a game-theoretic utility calculation
    expected value = sum over all outcomes of (outcome value * likelihood of occurrence)
  The value of a node is not simply copied from the "best" child
    it is summed over all possible children
Algorithm
  The tree has three types of nodes
    max nodes
    min nodes
    chance nodes
  Chance nodes calculate the expectation associated with all of the children
  (a sketch of the chance-node step follows below)
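A sketch of only the chance-node step, assuming a hypothetical ChanceOutcome type that pairs each random event (e.g. one dice roll) with its probability and the resulting state; below the chance node the search continues with the plain Minimax sketch from earlier.

#include <vector>

struct ChanceOutcome
{
    double probability;   // likelihood of this random event
    GameState state;      // game state after the event is applied
};

double Minimax(const GameState& s, int depth, bool maximizing, char me);  // from the earlier sketch

double ChanceNodeValue(const std::vector<ChanceOutcome>& outcomes,
                       int depth, bool maximizing, char me)
{
    double expected = 0.0;
    for (const ChanceOutcome& o : outcomes)   // sum over all children, not just the best one
        expected += o.probability * Minimax(o.state, depth - 1, maximizing, me);
    return expected;
}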
http://sern.ucalgary.ca/courses/cpsc/533/W99/presentations/L2_5B_Lima_Neitz/chance.html
Killer heuristic
  One additional optimization that works well in chess
  Often a move that is really good or really bad
    will be really good or bad in multiple board positions
  Example: a move that captures my queen
    if my queen is under attack, the move in which the opponent takes my queen will be his best move in most board positions
    except the positions in which I move the queen out of attack
  If a move leads to a really good or really bad position
    try it first when searching
    it is more likely to produce an extreme value that helps alpha-beta search
  (a move-ordering sketch follows below)
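A sketch of killer-move ordering layered on top of the alpha-beta sketch; the table size and the idea of remembering one killer move per depth are assumptions for illustration.

#include <algorithm>
#include <vector>

const int MaxDepth = 32;
int killerMove[MaxDepth] = {};        // move that last caused a cutoff at each search depth

// Reorder so the remembered killer move, if legal here, is searched first.
void OrderMoves(std::vector<int>& moves, int depth)
{
    auto it = std::find(moves.begin(), moves.end(), killerMove[depth]);
    if (it != moves.end())
        std::iter_swap(moves.begin(), it);
}

// Call this where AlphaBeta's 'beta <= alpha' test triggers a cutoff.
void RecordKiller(int move, int depth)
{
    killerMove[depth] = move;
}

Because the killer move tends to produce an extreme value early, alpha-beta gets its cutoffs sooner and more of the remaining moves are pruned.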
Midterm review
  Midterm topics
    Finite state machines
    Steering behaviors
    Graph search