CSCE 552 Fall 2012 AI By Jijun Tang
Homework 3 List the AI techniques used in games you have played; select one game and discuss how AI enhances its gameplay or how its AI could be improved. Due Nov 28th
Command Hierarchy Strategy for dealing with decisions at different levels From the general down to the foot soldier Modeled after military hierarchies General directs high-level strategy Foot soldier concentrates on combat
Dead Reckoning Method for predicting an object's future position based on its current position, velocity and acceleration Works well since movement is generally close to a straight line over short time periods Can also give guidance on how far an object could have moved Example: in a shooting game, estimating the leading distance
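The prediction described above is just the constant-acceleration motion equation p(t) = p0 + v·t + ½·a·t². A minimal sketch (names are illustrative, not from the slides):

```cpp
// Dead-reckoning sketch: predict a 2D position from current position,
// velocity and acceleration, assuming motion stays near a straight line
// over the short prediction window.
struct Vec2 { double x, y; };

Vec2 deadReckon(const Vec2& pos, const Vec2& vel, const Vec2& acc, double t) {
    return { pos.x + vel.x * t + 0.5 * acc.x * t * t,
             pos.y + vel.y * t + 0.5 * acc.y * t * t };
}
```

For the shooting example, an AI would feed in the target's state and the projectile's travel time to compute where to aim.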
Emergent Behavior Behavior that wasn't explicitly programmed Emerges from the interaction of simpler behaviors or rules Rules: seek food, avoid walls Can result in unanticipated individual or group behavior
Flocking/Formation
Mapping Example
Level-of-Detail AI Optimization technique like graphical LOD Only perform AI computations if the player will notice For example Only compute detailed paths for visible agents Off-screen agents don't think as often
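A toy sketch of the "off-screen agents think less often" idea (names are hypothetical, not from the slides): each agent's think rate depends on visibility, and the per-frame update only runs the expensive AI when the agent's interval comes up.

```cpp
// LOD-AI sketch: visible agents think every frame, off-screen agents
// only every 10th frame.
struct Agent {
    bool visible;
    int thinkInterval() const { return visible ? 1 : 10; }
};

bool shouldThink(const Agent& agent, int frame) {
    return frame % agent.thinkInterval() == 0;
}
```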
Manager Task Assignment Manager organizes cooperation between agents Manager may be invisible in game Avoids complicated negotiation and communication between agents Manager identifies important tasks and assigns them to agents For example, a coach in an AI football team
Example Amit [to Steve]: Hello, friend! Steve [nods to Bryan]: Welcome to CGDC. [Amit exits left.]
Amit.turns_towards(Steve);
Amit.walks_within(3);
Amit.says_to(Steve, "Hello, friend!");
Amit.waits(1);
Steve.turns_towards(Bryan);
Steve.walks_within(5);
Steve.nods_to(Bryan);
Steve.waits(1);
Steve.says_to(Bryan, "Welcome to CGDC.");
Amit.waits(3);
Amit.face_direction(DIR_LEFT);
Amit.exits();
Example The player escapes during combat: pop Combat off the stack and go to Search; if the player is not found, pop Search off and go to Patrol, …
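The push/pop behavior described above is a stack-based state machine. A minimal sketch (class and state names are illustrative): Combat sits on top of Search, which sits on top of Patrol, and popping a state resumes the one beneath it.

```cpp
#include <stack>
#include <string>

// Stack-based state machine sketch: the top of the stack is the active
// state; popping it resumes whatever the agent was doing before.
class StateStack {
public:
    void push(const std::string& state) { states_.push(state); }
    void pop() { if (!states_.empty()) states_.pop(); }
    std::string current() const {
        return states_.empty() ? "Idle" : states_.top();
    }
private:
    std::stack<std::string> states_;
};
```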
Example
Bayesian Networks Performs humanlike reasoning when faced with uncertainty Potential for modeling what an AI should know about the player Alternative to cheating RTS example: the AI can infer the existence or nonexistence of player-built units
Example
Bayesian Networks Inferring unobserved variables Parameter learning Structure learning
Blackboard Architecture Complex problem is posted on a shared communication space Agents propose solutions Solutions scored and selected Continues until problem is solved Alternatively, use concept to facilitate communication and cooperation
Decision Tree Learning Constructs a decision tree based on observed measurements from the game world Best known game use: Black & White The creature would learn and form "opinions" Learned what to eat in the world based on feedback from the player and the world
Filtered Randomness Filters randomness so that it still appears random to players over the short term Removes undesirable events Like a coin coming up heads 8 times in a row Statistical randomness is largely preserved without gross peculiarities Example: In an FPS, opponents should randomly spawn from different locations (and never spawn from the same location more than 2 times in a row).
Genetic Algorithms Technique for search and optimization that uses evolutionary principles Good at finding a solution in complex or poorly understood search spaces Typically done offline before game ships Example: Game may have many settings for the AI, but interaction between settings makes it hard to find an optimal combination
Flowchart
N-Gram Statistical Prediction Technique to predict the next value in a sequence In the sequence 18181810181, it would predict 8 as the next value Example In a street fighting game, the player just did Low Kick followed by Low Punch Predict their next move and expect it
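A rough sketch of the idea for N = 2 (a bigram; names are illustrative, not from the slides): count how often each value follows the previous one, then predict the most frequent successor. On the sequence 18181810181, 1 is followed by 8 four times but by 0 only once, so 8 is predicted.

```cpp
#include <map>

// Bigram predictor sketch: counts_[a][b] = how often b followed a.
class BigramPredictor {
public:
    void observe(int value) {
        if (hasPrev_) counts_[prev_][value]++;
        prev_ = value;
        hasPrev_ = true;
    }
    // Most frequent successor of `value`; -1 if never observed.
    int predictAfter(int value) const {
        auto it = counts_.find(value);
        if (it == counts_.end()) return -1;
        int best = -1, bestCount = 0;
        for (const auto& [next, count] : it->second)
            if (count > bestCount) { best = next; bestCount = count; }
        return best;
    }
private:
    std::map<int, std::map<int, int>> counts_;
    int prev_ = 0;
    bool hasPrev_ = false;
};
```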
Neural Networks Complex non-linear functions that relate one or more inputs to an output Must be trained with numerous examples Training is computationally expensive making them unsuited for in-game learning Training can take place before game ships Once fixed, extremely cheap to compute
Example
Planning Planning is a search to find a series of actions that change the current world state into a desired world state Increasingly desirable as game worlds become more rich and complex Requires Good planning algorithm Good world representation Appropriate set of actions
Player Modeling Build a profile of the player's behavior Continuously refine during gameplay Accumulate statistics and events The player model is then used to adapt the AI Make the game easier: if the player is not good at handling some weapons, avoid using them Make the game harder: if the player is not good at handling some weapons, exploit this weakness
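A toy sketch of the statistics-accumulation step (hypothetical names, not from the slides): record per-weapon hits and misses during play, so the AI can later query the player's accuracy and decide what to avoid or exploit.

```cpp
#include <map>
#include <string>

// Player-model sketch: accumulate per-weapon usage statistics.
class PlayerModel {
public:
    void record(const std::string& weapon, bool hit) {
        Stats& s = stats_[weapon];
        s.uses++;
        if (hit) s.hits++;
    }
    // Fraction of uses that hit; 0 if the weapon was never used.
    double accuracy(const std::string& weapon) const {
        auto it = stats_.find(weapon);
        if (it == stats_.end() || it->second.uses == 0) return 0.0;
        return double(it->second.hits) / it->second.uses;
    }
private:
    struct Stats { int uses = 0, hits = 0; };
    std::map<std::string, Stats> stats_;
};
```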
Production (Expert) Systems Formal rule-based system Database of rules Database of facts Inference engine to decide which rules trigger – resolves conflicts between rules Example Soar was used to experiment with Quake 2 bots Upwards of 800 rules for a competent opponent
Reinforcement Learning Machine learning technique Discovers solutions through trial and error Must reward and punish at appropriate times Can solve difficult or complex problems like physical control problems Useful when the AI's effects are uncertain or delayed
Reputation System Models the player's reputation within the game world Agents learn new facts by watching the player or through gossip from other agents Based on what an agent knows Might be friendly toward the player Might be hostile toward the player Affords new gameplay opportunities "Play nice OR make sure there are no witnesses"
Smart Terrain Put intelligence into inanimate objects The agent asks the object how to use it: how to open the door, how to set the clock, etc Agents can use objects they weren't originally programmed for Allows for expansion packs or user-created objects, as in The Sims Inspired by affordance theory Objects by their very design afford a very specific type of interaction
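A toy sketch of the affordance idea (hypothetical names, not from the slides): the object, not the agent, advertises the interactions it supports, so agents can use object types that did not exist when the agent was written.

```cpp
#include <string>
#include <vector>

// Smart-terrain sketch: each object carries its own list of afforded
// interactions; the agent just asks the object.
struct SmartObject {
    std::string name;
    std::vector<std::string> affordances;  // e.g. "open", "sit", "set_time"
};

bool agentCanUse(const SmartObject& obj, const std::string& action) {
    for (const std::string& a : obj.affordances)
        if (a == action) return true;
    return false;
}
```

A user-created object shipped in an expansion pack works automatically, because the agent's code never mentions any specific object type.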
Speech Recognition Players can speak into microphone to control some aspect of gameplay Limited recognition means only simple commands possible Problems with different accents, different genders, different ages (child vs adult)
Text-to-Speech Turns ordinary text into synthesized speech Cheaper than hiring voice actors Quality of speech is still a problem Not particularly natural sounding Intonation problems Algorithms are not good at "voice acting": the mouth needs to be animated based on the text Large disc capacities make recording human voices not that big a problem No need to resort to a worse-sounding solution
Weakness Modification Learning General strategy to keep the AI from losing to the player in the same way every time Two main steps 1. Record a key gameplay state that precedes a failure 2. Recognize that state in the future and change something about the AI behavior The AI might not win more often or act more intelligently, but it won't lose in the same way every time Keeps "history from repeating itself"
Artificial Intelligence: Pathfinding
PathPlannerApp Demo
Representing the Search Space Agents need to know where they can move The search space should represent either Clear routes that can be traversed Or the entire walkable surface The search space typically doesn't represent: Small obstacles or moving objects Most common search space representations: Grids Waypoint graphs Navigation meshes
Grids 2D grids – intuitive world representation Works well for many games including some 3D games such as Warcraft III Each cell is flagged Passable or impassable Each object in the world can occupy one or more cells
Characteristics of Grids Fast look-up Easy access to neighboring cells Complete representation of the level
Waypoint Graph A waypoint graph specifies lines/routes that are "safe" for traversing Each line (or link) connects exactly two waypoints
Characteristics of Waypoint Graphs Waypoint node can be connected to any number of other waypoint nodes Waypoint graph can easily represent arbitrary 3D levels Can incorporate auxiliary information Such as ladders and jump pads Radius of the path
Navigation Meshes Combination of grids and waypoint graphs Every node of a navigation mesh represents a convex polygon (or area) As opposed to a single position in a waypoint node Advantage of convex polygon Any two points inside can be connected without crossing an edge of the polygon Navigation mesh can be thought of as a walkable surface
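The convexity property above can be checked mechanically: a polygon is convex when every pair of consecutive edges turns the same way, i.e. all cross products have the same sign. A sketch (illustrative code, not from the slides):

```cpp
#include <vector>

struct Pt { double x, y; };

// Convexity test: inside a convex polygon, any two points can be joined
// by a straight segment without crossing an edge, which is why nav-mesh
// nodes are required to be convex.
bool isConvex(const std::vector<Pt>& poly) {
    int n = static_cast<int>(poly.size());
    bool hasPos = false, hasNeg = false;
    for (int i = 0; i < n; ++i) {
        const Pt& a = poly[i];
        const Pt& b = poly[(i + 1) % n];
        const Pt& c = poly[(i + 2) % n];
        double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross > 0) hasPos = true;
        if (cross < 0) hasNeg = true;
    }
    return !(hasPos && hasNeg);  // all turns bend the same way
}
```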
Navigation Meshes (continued)
Computational Geometry CGAL (Computational Geometry Algorithms Library) Find the closest phone Find the route from point A to B Convex hull
Example—No Rotation
Space Split
Resulting Path
Improvement
Example 2—With Rotation
Example 3—Visibility Graph
Random Trace Simple algorithm Agent moves towards goal If goal reached, then done If obstacle Trace around the obstacle clockwise or counter-clockwise (pick randomly) until free path towards goal Repeat procedure until goal reached
Random Trace (continued) How will Random Trace do on the following maps?
Random Trace Characteristics Not a complete algorithm Found paths are unlikely to be optimal Consumes very little memory
A* Pathfinding Directed search algorithm used for finding an optimal path through the game world Uses knowledge about the destination to direct the search A* is regarded as the best Guaranteed to find a path if one exists Will find the optimal path Very efficient and fast
Understanding A* To understand A* First understand Breadth-First, Best-First, and Dijkstra algorithms These algorithms use nodes to represent candidate paths
Class Definition

class PlannerNode {
public:
    PlannerNode *m_pParent;
    int m_cellX, m_cellY;
    ...
};

The m_pParent member is used to chain nodes sequentially together to represent a path
Data Structures All of the following algorithms use two lists The open list The closed list Open list keeps track of promising nodes When a node is examined from open list Taken off open list and checked to see whether it has reached the goal If it has not reached the goal Used to create additional nodes Then placed on the closed list
Overall Structure of the Algorithms 1. Create start point node – push onto open list 2. While open list is not empty A. Pop node from open list (call it currentNode) B. If currentNode corresponds to goal, break from step 2 C. Create new nodes (successor nodes) for cells around currentNode and push them onto open list D. Put currentNode onto closed list
Breadth-First Finds a path from the start to the goal by examining the search space ply-by-ply
Breadth-First Characteristics Exhaustive search Systematic, but not clever Consumes a substantial amount of CPU and memory Guaranteed to find the path with the fewest nodes in it Not necessarily the shortest distance! Complete algorithm
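The ply-by-ply expansion described above can be sketched on a grid of passable/impassable cells (illustrative code; for brevity it tracks distances in a 2D array rather than chaining PlannerNode parents):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Breadth-first pathfinding sketch on a grid (0 = passable, 1 = blocked).
// The first time the goal is popped, its path uses the fewest cells.
// Returns the number of steps, or -1 if no path exists.
int bfsPathLength(const std::vector<std::vector<int>>& grid,
                  int sx, int sy, int gx, int gy) {
    int h = grid.size(), w = grid[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int, int>> open;      // the "open list"
    dist[sy][sx] = 0;
    open.push({sx, sy});
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [x, y] = open.front(); open.pop();
        if (x == gx && y == gy) return dist[y][x];
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                grid[ny][nx] == 0 && dist[ny][nx] == -1) {
                dist[ny][nx] = dist[y][x] + 1;  // one ply deeper
                open.push({nx, ny});
            }
        }
    }
    return -1;  // open list exhausted: no path
}
```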
Best-First Uses problem specific knowledge to speed up the search process Head straight for the goal Computes the distance of every node to the goal Uses the distance (or heuristic cost) as a priority value to determine the next node that should be brought out of the open list
Best-First (continued)
Situation where Best-First finds a suboptimal path
Best-First Characteristics Heuristic search Uses fewer resources than Breadth-First Tends to find good paths No guarantee of finding the optimal path Complete algorithm
Dijkstra Disregards distance to goal Keeps track of the cost of every path No guessing Computes accumulated cost paid to reach a node from the start Uses the cost (called the given cost) as a priority value to determine the next node that should be brought out of the open list
Dijkstra Characteristics Exhaustive search At least as resource-intensive as Breadth-First Always finds the optimal path Complete algorithm
Example
A* Uses both heuristic cost and given cost to order the open list Final Cost = Given Cost + (Heuristic Cost * Heuristic Weight)
A* Characteristics Heuristic search On average, uses fewer resources than Dijkstra and Breadth-First An admissible heuristic guarantees it will find the optimal path Complete algorithm
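A compact sketch of A* on the same kind of grid, using Final Cost = Given Cost + Heuristic Cost × Heuristic Weight with a Manhattan-distance heuristic, which is admissible for 4-way movement (illustrative code, not the slides' own; setting the weight to 0 collapses it to Dijkstra):

```cpp
#include <cstdlib>
#include <queue>
#include <utility>
#include <vector>

// A* sketch on a grid (0 = passable, 1 = blocked). Returns the optimal
// path cost in steps, or -1 if no path exists.
int aStarPathCost(const std::vector<std::vector<int>>& grid,
                  int sx, int sy, int gx, int gy,
                  double heuristicWeight = 1.0) {
    int h = grid.size(), w = grid[0].size();
    auto heuristic = [&](int x, int y) {           // Manhattan distance
        return std::abs(x - gx) + std::abs(y - gy);
    };
    std::vector<std::vector<int>> given(h, std::vector<int>(w, -1));
    // Open list ordered by final cost, smallest first.
    using Entry = std::pair<double, std::pair<int, int>>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    given[sy][sx] = 0;
    open.push({heuristic(sx, sy) * heuristicWeight, {sx, sy}});
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [x, y] = open.top().second; open.pop();
        if (x == gx && y == gy) return given[y][x];
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h || grid[ny][nx]) continue;
            int g = given[y][x] + 1;               // given cost so far
            if (given[ny][nx] == -1 || g < given[ny][nx]) {
                given[ny][nx] = g;                 // final = given + h*weight
                open.push({g + heuristic(nx, ny) * heuristicWeight, {nx, ny}});
            }
        }
    }
    return -1;  // no path
}
```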
Example
Start Node and Costs F=G+H
First Move
Second Move
Cost Map
Path
Pathfinding with Constraints
More Examples