Artificial Intelligence and Computer Games
Dr. John Laird, University of Michigan
Modifications by Roger Webster, Ph.D.
03-28-2016
Different Practices of AI The study of “rational” behavior and processing. The study of human behavior and cognitive processing. The study of other approaches: neural, evolutionary. Computational models of component processes: knowledge bases, inference engines, search techniques, machine learning techniques. Understanding the connection between domains & techniques. Computational constraints vs. desired behavior. Application of techniques to real world problems.
Roles of AI in Games Opponents Teammates Strategic Opponents Support Characters Autonomous Characters Commentators Camera Control Plot and Story Guides/Directors
Goals of AI action game opponent Provide a challenging opponent Not always as challenging as a human -- Quake monsters. Not too challenging. Should not be superhuman in accuracy, precision, sensing, ... Should not be too predictable. Through randomness. Through multiple, fine-grained responses. Through adaptation and learning.
AI Agent in a Game
Define an API for agents covering sensing and acting.
Each time through the control loop, update each agent.
Encapsulate all agent data structures so agents can't trash each other or the game.
Share global data structures on maps, etc.
(diagram: the Game mediating between Agent 1, Agent 2, and the Player)
Structure of an Intelligent Agent Sensing: perceive features of the environment. Thinking: decide what action to take to achieve its goals, given the current situation and its knowledge. Acting: doing things in the world. Thinking has to make up for limitations in sensing and acting. The more accurate the models of sensing and acting, the more realistic the behavior.
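A minimal sketch of what a sense/think/act agent API like this might look like in C++; the class, struct, and method names here are illustrative assumptions, not part of any particular engine.

```cpp
// Hypothetical sketch of a sense/think/act agent API.
struct Percepts {            // what the agent is allowed to sense this frame
    bool  enemyVisible;
    bool  soundHeard;
    float health;
};

struct Action {              // what the agent asks the game to do
    float moveX, moveY;
    bool  fire;
};

class Agent {
public:
    virtual ~Agent() = default;
    // The game fills in Percepts; the agent returns an Action.
    // Agents never touch each other's data or the game state directly.
    virtual Action update(const Percepts& sensed) = 0;
};

// Inside the game's control loop (names such as game.sense/apply are assumed):
// for (Agent* a : agents) {
//     Percepts p = game.sense(*a);      // sensing, limited by the game
//     Action act = a->update(p);        // thinking
//     game.apply(*a, act);              // acting, validated by the game
// }
```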
Sensing Limitations & Complexities Limited sensor distance Limited field of view: Must point sensor at location and keep it on. Obstacles Complex room structures Detecting and computing paths to doors Noise in sensors Different sensors give different information and have different limitations. Sound: omni-directional, gives direction, distances, speech, ... Vision: limited field of view, 2 1/2D, color, texture, motion, ... Smell: omni-directional, chemical makeup.
Simple Behavior
Random motion: just roll the dice to pick when and which direction to move.
Simple pattern: follow invisible tracks (Galaxians).
Tracking:
  Pure Pursuit: move toward the agent's current position (heat-seeking missile).
  Lead Pursuit: move to a position in front of the agent.
  Collision: move toward where the agent will be.
  Weave: every N seconds move X degrees off the opponent's bearing.
  Spiral: head 90-M degrees off the opponent's bearing.
  Evasive: the opposite of any tracking behavior.
Random
Simple Patterns
Pure Pursuit
Lead Pursuit
Collision
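As a rough illustration of the tracking variants above (pure pursuit, lead pursuit, and collision/intercept), here is a small C++ sketch; the vector type and the constant-velocity target assumption are mine, not from the slides.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  sub(Vec2 a, Vec2 b)    { return {a.x - b.x, a.y - b.y}; }
static Vec2  add(Vec2 a, Vec2 b)    { return {a.x + b.x, a.y + b.y}; }
static Vec2  scale(Vec2 a, float s) { return {a.x * s, a.y * s}; }
static float len(Vec2 a)            { return std::sqrt(a.x * a.x + a.y * a.y); }
static Vec2  norm(Vec2 a)           { float l = len(a); return l > 0 ? scale(a, 1.0f / l) : a; }

// Pure pursuit: head toward where the target is right now.
Vec2 purePursuit(Vec2 self, Vec2 target) {
    return norm(sub(target, self));
}

// Lead pursuit: head toward a point in front of the target.
Vec2 leadPursuit(Vec2 self, Vec2 target, Vec2 targetVel, float leadTime) {
    Vec2 ahead = add(target, scale(targetVel, leadTime));
    return norm(sub(ahead, self));
}

// Collision (intercept): head toward where the target will be when we arrive,
// estimated from our own speed and the current distance.
Vec2 collisionCourse(Vec2 self, Vec2 target, Vec2 targetVel, float mySpeed) {
    float timeToReach = (mySpeed > 0) ? len(sub(target, self)) / mySpeed : 0.0f;
    Vec2 meet = add(target, scale(targetVel, timeToReach));
    return norm(sub(meet, self));
}
```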
Moving in the World: Path Following
Just try moving toward the goal. (diagram: straight-line path from Source to Goal)
Problem: the straight path can run into obstacles. (diagram: blocked path from Source to Goal)
Create Avoidance Regions: mark regions around obstacles and steer around them. (diagram: path from Source to Goal routed around the avoidance regions)
Indoors - Nodes (diagram: waypoint nodes placed through an indoor map, with labels such as Pickup, Die, Run, Attack)
Path Planning
Find a path from one point to another using an internal model.
Satisficing: try to find a good way to achieve the goal.
Optimizing: try to find the best way to achieve the goal.
Path Finding (diagram: a maze of rooms represented as a graph with numeric edge costs)
Analysis
Find the shortest path through a maze of rooms.
The approach is A*: at each step, calculate the cost of each expanded path plus an estimate of the remaining cost, then extend the path with the lowest cost + estimate.
Cost can be more than just distance:
  Climbing and swimming are harder (2x).
  Monster-filled rooms are really bad (5x).
  A cost can be added for turning, which creates smoother paths.
  But cost must be a numeric calculation.
The estimate must never overestimate the remaining cost; with that guarantee, A* will always find the shortest path.
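A compact sketch of the A* procedure described above, with terrain-dependent multipliers folded into the edge costs; the graph representation, helper names, and heuristic array are assumptions for illustration.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; float cost; };               // cost already includes terrain
                                                   // multipliers (e.g., x2 swim, x5 monsters)
using Graph = std::vector<std::vector<Edge>>;

// heuristic[n] must never overestimate the true remaining cost (admissible),
// e.g., straight-line distance to the goal.
std::vector<int> aStar(const Graph& g, int start, int goal,
                       const std::vector<float>& heuristic) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> gCost(g.size(), INF);
    std::vector<int>   parent(g.size(), -1);
    using Item = std::pair<float, int>;            // (gCost + heuristic, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;

    gCost[start] = 0.0f;
    open.push({heuristic[start], start});
    while (!open.empty()) {
        auto [f, n] = open.top(); open.pop();
        if (n == goal) break;                      // cheapest estimated path reached the goal
        if (f > gCost[n] + heuristic[n]) continue; // stale queue entry, skip it
        for (const Edge& e : g[n]) {
            float cand = gCost[n] + e.cost;
            if (cand < gCost[e.to]) {              // found a cheaper way to e.to
                gCost[e.to] = cand;
                parent[e.to] = n;
                open.push({cand + heuristic[e.to], e.to});
            }
        }
    }
    std::vector<int> path;                         // reconstruct start -> goal
    for (int n = goal; n != -1; n = parent[n]) path.insert(path.begin(), n);
    return path;
}
```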
Goals
Exposure to AI for tactical decision making:
  Not state-of-the-art AI, but relevant to computer games.
  Concepts, not code.
  Analysis of strengths and weaknesses.
  Pointers to more detailed references.
  Enough to be "dangerous".
What's missing?
  Sensing models.
  Path planning and spatial reasoning.
  Scripting languages.
  Teamwork, flocking (see boids by Craig Reynolds, www.red3d.com).
  Personality, emotion, ...
  How all of these pieces fit together.
Planning Variety of AI decision-making techniques: Finite-state Machines Decision Trees Neural Networks Genetic Algorithms Rule-based Systems Fuzzy Logic Planning Systems Context of a simple game scenario Implementation issues Evaluate their strengths and weaknesses
Types of Behavior to Capture
Wander randomly if don't see or hear an enemy.
When see an enemy, attack.
When hear an enemy, chase the enemy.
When die, respawn.
When health is low and see an enemy, retreat.
Execution Flow of an AI Engine
Sense -> Think -> Act, cycling against the Environment.
Thinking can be extremely expensive; it can be modulated so the agent only "thinks" occasionally.
The Think step can use finite-state machines, decision trees, neural nets, fuzzy logic, rule-based systems, or planning systems.
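One common way to keep the Think step from dominating the frame budget is to run it only every few frames while Sense and Act run every frame. A small sketch of that idea; the class name, think period, and placeholder methods are made up for illustration.

```cpp
// Hypothetical sketch: throttle expensive "thinking" to every thinkPeriod frames.
class ThrottledAgent {
public:
    explicit ThrottledAgent(int thinkPeriod) : thinkPeriod_(thinkPeriod) {}

    void update(int frame) {
        sense();                              // cheap: refresh percepts every frame
        if (frame % thinkPeriod_ == 0) {
            currentDecision_ = think();       // expensive: FSM / planner / etc., run rarely
        }
        act(currentDecision_);                // cheap: keep executing the last decision
    }

private:
    int thinkPeriod_;
    int currentDecision_ = 0;                 // e.g., an index into a table of behaviors

    void sense()           { /* read the world state visible to this agent */ }
    int  think()           { /* pick a behavior; placeholder */ return currentDecision_; }
    void act(int decision) { /* carry out the chosen behavior */ (void)decision; }
};
```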
Finite State Machines: AI Tactical Decision Making Techniques
John Laird and Michael van Lent, University of Michigan
Modifications by Roger Webster, Ph.D.
Example FSM
Events: E = Enemy Seen, S = Sound Heard, D = Die.
(diagram: states Spawn, Wander (-E,-S,-D), Chase (S,-E,-D), Attack (E,-D), with transitions labeled by events such as E, S, -E, -S, D)
An action (callback) is performed when a transition occurs.
If the FSM were completely connected, you wouldn't need an FSM at all: a case or switch statement would do.
Problem: there is no transition from Attack back to Chase, so an agent that loses sight of the enemy but still hears it has no way to resume chasing.
Example FSM - Better
Events: E = Enemy Seen, S = Sound Heard, D = Die.
Split the attack state in two: Attack (E,-D,-S) and Attack-S (E,-D,S). Attack-S remembers that a sound was heard, so when the enemy is no longer seen (-E) the agent can transition to Chase instead of getting stuck.
(diagram: Spawn, Wander (-E,-S,-D), Chase (S,-E,-D), Attack (E,-D,-S), Attack-S (E,-D,S))
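The slides note that a completely connected FSM could be replaced by a case or switch statement. A minimal sketch of the Spawn/Wander/Chase/Attack machine in that style follows; because the switch can test the current values of E and S directly, it also handles the Attack-to-Chase case without splitting the Attack state. The state and event names follow the diagrams; the code layout is my own illustration.

```cpp
enum class State { Spawn, Wander, Chase, Attack };
struct Events { bool enemySeen, soundHeard, died; };   // E, S, D from the slides

State nextState(State s, const Events& e) {
    if (e.died) return State::Spawn;                   // D sends every state back to Spawn
    switch (s) {
        case State::Spawn:
            return State::Wander;                      // once spawned, start wandering
        case State::Wander:
            if (e.enemySeen)  return State::Attack;    // E
            if (e.soundHeard) return State::Chase;     // S
            return State::Wander;
        case State::Chase:
            if (e.enemySeen)   return State::Attack;   // E
            if (!e.soundHeard) return State::Wander;   // -S
            return State::Chase;
        case State::Attack:
            if (!e.enemySeen && e.soundHeard)  return State::Chase;   // the Attack-to-Chase fix
            if (!e.enemySeen && !e.soundHeard) return State::Wander;  // -E, -S
            return State::Attack;
    }
    return s;                                          // unreachable; keeps compilers quiet
}
```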
Example FSM with Retreat
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health.
(diagram: states Spawn (D), Wander (-E,-D,-S,-L), Wander-L (-E,-D,-S,L), Chase (-E,-D,S,-L), Attack-E (E,-D,-S,-L), Attack-ES (E,-D,S,-L), Retreat-E (E,-D,-S,L), Retreat-ES (E,-D,S,L), Retreat-S (-E,-D,S,L), with transitions on each event)
Each feature with N values can require N times as many states.
If there is no arc for an event, assume the agent just stays in the same state.
Actions that are supposed to happen in every state, such as "pick the best weapon", are awkward: how does the machine get back to the original state afterward?
If Wander is expanded in detail, every one of its states must be connected to all the others.
Extended FSM: Save Values
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health.
Maintain a memory of the current values of all events; transitions can test remembered (old) event values, not just the event that just arrived.
This collapses the state explosion: the same behavior needs only Spawn (D), Wander (-E,-D,-S), Chase (S,-L), Attack (E,-L), and Retreat (L).
Augmented FSM: Action on Transition
Events: E = Enemy Seen, S = Sound Heard, D = Die, L = Low Health.
Same machine as the extended FSM (Spawn, Wander, Chase, Attack, Retreat), but an action is executed during the transition itself, not just while sitting in a state.
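A small sketch of one way to attach an action (callback) to a transition, as in the augmented FSM above; the table layout, the specific (state, event) pairs, and the example actions are assumptions.

```cpp
#include <functional>
#include <map>
#include <utility>

enum class St { Wander, Chase, Attack, Retreat };
enum class Ev { EnemySeen, SoundHeard, LowHealth };

struct Transition {
    St target;
    std::function<void()> onTransition;       // executed while the transition fires
};

// Hypothetical transition table: (current state, event) -> (next state, action).
std::map<std::pair<St, Ev>, Transition> table = {
    {{St::Wander, Ev::SoundHeard}, {St::Chase,   [] { /* play alert bark */ }}},
    {{St::Chase,  Ev::EnemySeen},  {St::Attack,  [] { /* draw weapon */ }}},
    {{St::Attack, Ev::LowHealth},  {St::Retreat, [] { /* call for backup */ }}},
};

St step(St current, Ev event) {
    auto it = table.find({current, event});
    if (it == table.end()) return current;    // no arc: stay in the same state
    it->second.onTransition();                // augmented FSM: act during the transition
    return it->second.target;
}
```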
Hierarchical FSM
Expand a state into its own FSM.
(diagram: the Wander state contains a sub-FSM of Start, Turn Right, Go-through Door, Pick-up Powerup; at the top level, E/-E switches between Wander and Attack, S/-S switches to Chase, and Die returns to Spawn)
Non-Deterministic Hierarchical FSM (Markov Model)
(diagram: the Attack state expands into a sub-FSM: from Start, arcs with probabilities such as .3 and .4 lead to Approach, Aim & Slide Right & Shoot, Aim & Slide Left & Shoot, and Aim & Jump & Shoot; No enemy leads back to Wander, Die exits)
If more than one arc is eligible, pick among them according to the probabilities. This makes the behavior less predictable.
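A tiny sketch of how that probabilistic choice might be implemented: when several arcs leaving a state are eligible, sample one according to its weight. The .3/.4 weights come from the diagram; the sampling code itself is an illustration.

```cpp
#include <random>
#include <vector>

struct Arc { int target; float probability; };

// Pick an arc according to its probability; weights need not sum exactly to 1.
int pickArc(const std::vector<Arc>& arcs, std::mt19937& rng) {
    std::vector<float> weights;
    for (const Arc& a : arcs) weights.push_back(a.probability);
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return arcs[dist(rng)].target;
}

// Example: from the Attack sub-FSM's Start, pick "Aim & Slide Right & Shoot" (.3),
// "Aim & Slide Left & Shoot" (.3), or "Aim & Jump & Shoot" (.4).
```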
Simple Implementation
Compile the FSM into a 2D array indexed by state name and event: next-state := array[state-name][event].
Use the state name to dispatch to the execution logic.
Add buffers to queue up events in case simultaneous events arrive.
Hierarchical: create an array for every FSM, keep a stack of active states, classify incoming events according to the stack, and update the state that is sensitive to the current event.
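A sketch of the array-based implementation just described: a table indexed by [state][event] yielding the next state, plus a simple queue for buffering simultaneous events. The specific states, events, and table entries are illustrative.

```cpp
#include <queue>

enum State { SPAWN, WANDER, CHASE, ATTACK, NUM_STATES };
enum Event { ENEMY_SEEN, SOUND_HEARD, DIED, NUM_EVENTS };

// next_state := table[state][event]; an entry equal to the row's own state means "no arc, stay put".
static const State table[NUM_STATES][NUM_EVENTS] = {
    /* SPAWN  */ { WANDER, WANDER, SPAWN },
    /* WANDER */ { ATTACK, CHASE,  SPAWN },
    /* CHASE  */ { ATTACK, CHASE,  SPAWN },
    /* ATTACK */ { ATTACK, ATTACK, SPAWN },
};

struct Machine {
    State current = SPAWN;
    std::queue<Event> pending;                 // buffer simultaneous events

    void post(Event e) { pending.push(e); }

    void update() {
        while (!pending.empty()) {
            Event e = pending.front(); pending.pop();
            current = table[current][e];       // one array access per event
            // dispatch to per-state execution logic keyed on `current` here
        }
    }
};
```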
Extended & Augmented: Use a C++ class for states, with methods for actions and transitions.
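One possible shape for the class-per-state approach, with virtual methods for the state's action and its transition logic; the names and the particular transitions shown are my own assumptions.

```cpp
struct WorldView { bool enemySeen, soundHeard, lowHealth; };  // remembered event values

class StateNode {
public:
    virtual ~StateNode() = default;
    virtual void enter() {}                               // optional entry action
    virtual void act(const WorldView&) = 0;               // per-frame behavior in this state
    virtual StateNode* transition(const WorldView&) = 0;  // returns next state (or this)
};

class AttackState : public StateNode {
public:
    AttackState(StateNode* chase, StateNode* wander) : chase_(chase), wander_(wander) {}
    void act(const WorldView&) override { /* aim and shoot */ }
    StateNode* transition(const WorldView& w) override {
        if (!w.enemySeen && w.soundHeard)  return chase_;   // lost sight but still hear: chase
        if (!w.enemySeen && !w.soundHeard) return wander_;  // nothing sensed: wander
        return this;
    }
private:
    StateNode* chase_;
    StateNode* wander_;
};

// Driver: each tick, act in the current state, then follow at most one transition.
// current->act(view);
// StateNode* next = current->transition(view);
// if (next != current) { current = next; current->enter(); }
```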
FSM Evaluation
Advantages:
  Very fast: one array access per event.
  Can be compiled into a compact data structure (dynamic memory: just the current state; static memory: the state diagram in the array implementation).
  Tools can be built so non-programmers can author behavior.
  Non-deterministic FSMs can make behavior unpredictable.
Disadvantages:
  The number of states can grow very fast, exponentially with the number of events (s = 2^e).
  The number of arcs can grow even faster (a = s^2).
  Hard to encode complex memories.
  Propositional representation: difficult to express things like "pick up the better weapon" or "attack the closest enemy".
References
Web references:
  www.gamasutra.com/features/19970601/build_brains_into_games.htm
  csr.uvic.ca/~mmania/machines/intro.htm
  www.erlang.se/documentation/doc-4.7.3/doc/design_principles/fsm.html
  www.microconsultants.com/tips/fsm/fsmartcl.htm
Deloura, Game Programming Gems, Charles River Media, 2000, Sections 3.0 & 3.1, pp. 221-248.
General References
AI:
  Russell and Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
  Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, 1998.
AI and Computer Games:
  LaMothe, Tricks of the Windows Game Programming Gurus, SAMS, 1999, Chapter 12, pp. 713-796.
  Deloura, Game Programming Gems, Charles River Media, 2000, Section 3, pp. 219-350.
  www.gameai.com
  www.gamedev.net/