Steering Behaviors GAM 376 Robin Burke Winter 2008.

Outline Steering Behaviors Theory Implementations Homework #3

Admin Homework #2 due today Homework #3 due 1/24

Syllabus Change Lab availability problems New lab days 1/29 2/12 3/4 3/11 3/19 (9:00-11:00) Finals slot

Subsumption Architecture How to implement robot navigation? first attempts were very, very slow robot had to totally understand its world before moving world changed while it moved Rod Brooks changed the game parallel decision making different concerns at different levels results unified later there is some evidence that the brain works like this

Subsumption Architecture II Brooks' levels reactive quick, instinctive stop when you get to the door executive automatic, sequential stick out your hand, turn knob, push door deliberative requires thought if knob doesn't turn, look for key if door doesn't push, try pulling

Subsumption Architecture III Very useful for thinking about game agents deliberative goal-driven behavior (Ch. 9) executive finite state machine reactive scripts (Ch. 6) steering behaviors Often agents are not too smart animals, monsters deliberative behavior not expected good reactive behaviors go a long way

Steering behaviors Tendencies of motion that produce useful (interesting, plausible, etc.) navigation activity by purely reactive means without extensive prediction Pioneering paper Reynolds, 1999 I am using his examples and animations

Examples I want the insect monsters to swarm at the player all at once, but not get in each other's way. I want the homing missile to track the ship and close in on it. I want the guards to wander around, but not get too far from the treasure and not too close to each other. I want pedestrians to cross the street, but avoid on-coming cars.

Steering behavior solution Write a mathematical rule that describes accelerations to be made in response to the state of the environment Example: "don't hit the wall" generate a backwards force inversely proportional to the distance to the wall the closer you get, the more you will be pushed away if you're going really fast, you'll get closer to the wall, but you'll slow down smoothly

Steering forces Acceleration is caused by forces so we call these effects steering forces Forces are multiple and asymmetrical you can stop faster than you can accelerate it is hard to turn on a dime

Combining forces Behaviors can be combined by summing the forces that they produce Example: follow I want the spy to follow the general, but not too close two behaviors go to general's location creates a force pointing in his direction not too close a counter-force inversely proportional to distance where the forces balance is where the spy will tend to stay

Seek / Flee I animation

Seek / Flee II The desired velocity is towards the target Calculate the steering force needed to turn the current velocity into that one H_target = P_target − P_current V_desired = Norm(H_target) * V_max F_steer = V_desired − V_current Flee just takes the opposite heading
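
The Seek/Flee calculation can be sketched in Python; the tuple-based vectors and function names are illustrative, not from any particular engine:

```python
import math

def seek(pos, vel, target, v_max):
    """F_steer = Norm(P_target - P_current) * V_max - V_current."""
    hx, hy = target[0] - pos[0], target[1] - pos[1]   # H_target
    dist = math.hypot(hx, hy)
    if dist == 0:
        return (0.0, 0.0)                             # already at the target
    return (hx / dist * v_max - vel[0],               # V_desired - V_current
            hy / dist * v_max - vel[1])

def flee(pos, vel, target, v_max):
    """Same as seek, but the desired heading points away from the target."""
    hx, hy = pos[0] - target[0], pos[1] - target[1]
    dist = math.hypot(hx, hy)
    if dist == 0:
        return (0.0, 0.0)
    return (hx / dist * v_max - vel[0],
            hy / dist * v_max - vel[1])
```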

Arrive I Seek can overshoot it arrives at the target at V_max Arrive aims to decelerate as it approaches animation

Arrive II Do the same calculation as Seek but V_desired is now a function of the distance V_desired = (||H_target|| / K) * Norm(H_target) * V_max when ||H_target|| < K full speed far away, slower closer
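
A sketch of Arrive under the same tuple conventions; K (the slowing radius) is a tuning parameter:

```python
import math

def arrive(pos, vel, target, v_max, k):
    """Like seek, but desired speed scales down inside radius k."""
    hx, hy = target[0] - pos[0], target[1] - pos[1]   # H_target
    dist = math.hypot(hx, hy)
    if dist == 0:
        return (-vel[0], -vel[1])                     # on target: just brake
    speed = v_max if dist >= k else v_max * dist / k  # slower when closer
    return (hx / dist * speed - vel[0],
            hy / dist * speed - vel[1])
```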

Pursue / Evade I Suppose I want to go after a moving thing rather than a fixed point if I steer to its current location it will be gone animation

Evade Similarly it may not be effective to run away from the attacker's current position

Pursue / Evade II The same as Seek, but now the position of the target is estimated into the future P'_target = P_target + V_target * T_toTarget This is "smarter" behavior aims to "cut off" the quarry Similarly for Evade flee the target's future position How to calculate time to target? hard to do this precisely estimate with the time to reach the target's current position T_toTarget = ||P_target − P_current|| / V_max If you want to get fancy you can include turning time
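
A Pursue sketch combining the prediction with Seek; the names and tuple vectors are illustrative:

```python
import math

def pursue(pos, vel, target_pos, target_vel, v_max):
    """Seek the target's predicted position P' = P + V * T,
    with T estimated as distance-to-target / V_max."""
    dx, dy = target_pos[0] - pos[0], target_pos[1] - pos[1]
    t = math.hypot(dx, dy) / v_max                    # T_toTarget estimate
    fx = target_pos[0] + target_vel[0] * t            # P'_target
    fy = target_pos[1] + target_vel[1] * t
    hx, hy = fx - pos[0], fy - pos[1]                 # now seek P'_target
    d = math.hypot(hx, hy)
    if d == 0:
        return (0.0, 0.0)
    return (hx / d * v_max - vel[0], hy / d * v_max - vel[1])
```

Evade is the same prediction followed by Flee instead of Seek.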

Wander I We may want our agent to move randomly about selecting random heading and velocity looks jerky and unnatural What we want is random motion that has a certain smoothness animation

Wander II Solution is to put a circle in front of the agent pick a point on the circle head towards it move the point randomly Parameters Circle size Small circle means heading will vary less Jitter Larger means that the target point can move farther on the circle Wander distance The farther the circle is ahead of the agent, the greater the steering force associated with it
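
One way to code a Wander tick; representing the wander target as an angle on the circle is my own choice, and the parameter names are illustrative:

```python
import math
import random

def wander_force(heading, theta, radius, jitter, dist, rng=random):
    """One tick of Wander.

    heading -- agent's heading angle (radians)
    theta   -- wander target's angle on the circle, carried between frames
    Returns (steering_force, new_theta)."""
    theta += rng.uniform(-jitter, jitter)        # jitter the target on the circle
    cx = math.cos(heading) * dist                # circle center, projected ahead
    cy = math.sin(heading) * dist
    tx = cx + math.cos(theta) * radius           # target point on the circle
    ty = cy + math.sin(theta) * radius
    return (tx, ty), theta                       # steer towards the target point
```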

Obstacle avoidance I We always want our agents to avoid obstacles avoids stupidity animation

Obstacle avoidance II Basic idea project a box forward in the direction of motion think of the box as a "corridor of safety" as long as there are no obstacles in the box motion forward is safe To do this find all of the objects that are nearby too expensive to check everything ignore those that are behind you see if any of the obstacles overlap the box if none, charge ahead if several, find the closest one this is what we have to avoid

Obstacle avoidance III Steering force we want to turn away from the obstacle just enough to miss it we want to slow down so we have time to correct Need a steering force perpendicular to the agent's heading proportional to how far the obstacle protrudes into the detection box Need a braking force anti-parallel to agent's heading proportional to our proximity to obstacle
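
The two forces can be sketched in the agent's local frame (x forward, y lateral); the gains and exact formulas here are illustrative choices, not lifted from any engine's source:

```python
def obstacle_avoidance(local_x, local_y, obs_radius, box_length, braking_k=0.2):
    """Assumes the obstacle center is already in local space and inside the box."""
    # closer obstacles produce stronger corrections
    multiplier = 1.0 + (box_length - local_x) / box_length
    # lateral force: perpendicular to the heading, away from the obstacle,
    # proportional to how far it protrudes past the agent's centerline
    side = -1.0 if local_y >= 0 else 1.0
    lateral = side * (obs_radius - abs(local_y)) * multiplier
    # braking force: anti-parallel to the heading, stronger when closer
    braking = -braking_k * (box_length - local_x)
    return (braking, lateral)
```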

Wall avoidance I Seems like a special case of obstacle avoidance but it isn't a wall is a very large obstacle calculations involving its radius aren't very useful animation

Wall avoidance II Project lines in front of the agent whiskers If the whiskers touch a wall create a steering force proportional to the degree of penetration into the wall Reynolds uses a slightly different technique use a single whisker but move it around
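
A sketch of the per-whisker test, assuming the wall is given as a point and outward normal; names are illustrative:

```python
def wall_force(tip, wall_point, wall_normal):
    """Steering force for one whisker; tip is the whisker's end point,
    (wall_point, wall_normal) define the wall plane."""
    # signed distance of the tip from the wall plane along its normal
    d = ((tip[0] - wall_point[0]) * wall_normal[0] +
         (tip[1] - wall_point[1]) * wall_normal[1])
    if d >= 0:
        return (0.0, 0.0)              # whisker has not crossed the wall
    depth = -d                         # penetration depth
    return (wall_normal[0] * depth, wall_normal[1] * depth)
```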

Interpose "Cut that out, you two" the chaperone always tries to get in between two other agents Implement by creating a midpoint and try to arrive at it Demo

Hide Position a hiding agent so it is behind an obstacle relative to another seeker agent For each obstacle project a line from the seeker through all obstacles hiding positions are on the other side find the closest one and arrive to it Demo
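
Computing one candidate hiding spot per obstacle can look like this; the clearance offset is an assumed tuning value:

```python
import math

def hiding_spot(seeker, obs_pos, obs_radius, offset=2.0):
    """Point on the far side of the obstacle from the seeker;
    offset is the clearance beyond the obstacle's edge."""
    dx, dy = obs_pos[0] - seeker[0], obs_pos[1] - seeker[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return obs_pos                           # degenerate: seeker on obstacle
    away = obs_radius + offset                   # distance past the center
    return (obs_pos[0] + dx / d * away, obs_pos[1] + dy / d * away)
```

Compute one spot per obstacle, then Arrive at the closest.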

Hide II Lots of tweaks possible avoid hiding positions in front of seeker Avoid "magic" hiding always knowing where the seeker is

Path Following It is easy to have an agent follow a path but it doesn't always look natural Simple implementation seek from point to point works if paths are line segments More complex create a tunnel around path and do wall following
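
The simple point-to-point version can be sketched as Seek plus a waypoint index; the interface is illustrative:

```python
import math

def path_follow(pos, vel, waypoints, idx, v_max, switch_dist):
    """Seek the current waypoint; advance when within switch_dist.
    Returns (steering_force, new_waypoint_index)."""
    tx, ty = waypoints[idx]
    if (math.hypot(tx - pos[0], ty - pos[1]) < switch_dist
            and idx < len(waypoints) - 1):
        idx += 1                                  # move on to the next waypoint
        tx, ty = waypoints[idx]
    hx, hy = tx - pos[0], ty - pos[1]
    d = math.hypot(hx, hy)
    if d == 0:
        return (0.0, 0.0), idx
    return (hx / d * v_max - vel[0], hy / d * v_max - vel[1]), idx
```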

Path Following II animation demo

Offset Pursuit Follow another agent at some position synchronized swimming, anyone? Calculate an offset behind the leader Try to arrive at this point demo

Group behaviors Behaviors that depend on a neighborhood of other agents Classic examples fish schooling birds flocking

Separation "Don't crowd" Basic idea generate a force based on the proximity of each other agent sum all of the vectors Result Each agent will move in the direction that takes it furthest from others Neighbors disperse from each other

Alignment "Stay in step" Basic idea keep an agent's heading aligned with its neighbors calculate the average heading and go that way Result the group moves in the same direction

Cohesion "Stay together" Basic idea opposite of separation generate a force towards the center of mass of neighbors Result group stays together

Combining these behaviors We get flocking different weights and parameters yield different effects animation demo
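
The three behaviors and their weighted sum can be sketched together; the weights and the inverse-square separation falloff are illustrative choices:

```python
import math

def flock_force(pos, vel, neighbors, w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """neighbors: list of (position, velocity) pairs within the neighborhood."""
    if not neighbors:
        return (0.0, 0.0)
    sep_x = sep_y = avg_vx = avg_vy = cx = cy = 0.0
    for (px, py), (vx, vy) in neighbors:
        dx, dy = pos[0] - px, pos[1] - py
        d = math.hypot(dx, dy)
        if d > 0:
            sep_x += dx / (d * d)        # separation: push away, stronger when close
            sep_y += dy / (d * d)
        avg_vx += vx; avg_vy += vy       # alignment: match average velocity
        cx += px; cy += py               # cohesion: head for the center of mass
    n = len(neighbors)
    ali_x, ali_y = avg_vx / n - vel[0], avg_vy / n - vel[1]
    coh_x, coh_y = cx / n - pos[0], cy / n - pos[1]
    return (w_sep * sep_x + w_ali * ali_x + w_coh * coh_x,
            w_sep * sep_y + w_ali * ali_y + w_coh * coh_y)
```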

Implementation issues Combining behaviors each steering behavior outputs a force it is possible for the total force to exceed the agent's acceleration capacity What to do?

Combination methods Simplest: Weighted truncated sum, weight the behaviors, add up, and truncate at max_force very tricky to get the weights right must do all of the calculations Better: Prioritization Evaluate behaviors in a predefined order obstacle avoidance first wander last Keep evaluating and adding until max_force reached Problem is getting the fixed priority right Cheaper: Prioritized dithering Associate a probability with each behavior probabilities sum to 1 That behavior will get its force applied a certain percentage of the time
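
The prioritized scheme can be sketched as an accumulator that stops once the force budget is spent; names are illustrative:

```python
import math

def accumulate(total, force, max_force):
    """Add `force` into `total` only up to remaining capacity.
    Returns (new_total, capacity_left)."""
    remaining = max_force - math.hypot(*total)
    if remaining <= 0:
        return total, False                       # budget spent: stop evaluating
    mag = math.hypot(*force)
    if mag == 0:
        return total, True
    scale = min(1.0, remaining / mag)             # truncate the last contribution
    return (total[0] + force[0] * scale, total[1] + force[1] * scale), True

def combine_prioritized(forces, max_force):
    """forces: steering forces in priority order (e.g. avoidance first, wander last)."""
    total = (0.0, 0.0)
    for f in forces:
        total, more = accumulate(total, f, max_force)
        if not more:
            break
    return total
```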

Partitioning We want to calculate the neighbors of each agent if we look at all agents, an O(n²) operation if there are many, many agents, too slow Many techniques for speeding this up basic idea is to consider only those agents that could be neighbors carve up space and just look at the relevant bits Very important in other parts of game programming, too collision detection view rendering

Cell-space partition Cover space with a grid Maintain a list of agents in each cell not that expensive since it is just an x,y threshold test Calculate which grid cells could contain neighbors check only those agents in the affected cells O(n)
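
A minimal grid partition sketch; the class name and interface are illustrative:

```python
import math

class CellSpace:
    """Hash agents into grid cells so neighbor queries skip distant agents."""
    def __init__(self, cell_size):
        self.cell = cell_size
        self.cells = {}                      # (col, row) -> list of (id, pos)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def add(self, agent_id, pos):
        self.cells.setdefault(self._key(*pos), []).append((agent_id, pos))

    def neighbors(self, pos, radius):
        """Check only agents in cells the query circle could touch."""
        cx, cy = self._key(*pos)
        span = int(radius // self.cell) + 1
        found = []
        for gx in range(cx - span, cx + span + 1):
            for gy in range(cy - span, cy + span + 1):
                for aid, p in self.cells.get((gx, gy), []):
                    if math.hypot(p[0] - pos[0], p[1] - pos[1]) <= radius:
                        found.append(aid)
        return found
```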

Smoothing Jitter occurs when behaviors switch in and out obstacle avoidance kicks in when an object is in the detection box but other behaviors push back towards the obstacle Solution average the heading over several updates
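
Averaging the heading over the last few updates can be sketched with a small ring buffer; the class is illustrative:

```python
import math
from collections import deque

class HeadingSmoother:
    """Keep the last n heading vectors and return their normalized average."""
    def __init__(self, n):
        self.history = deque(maxlen=n)       # old samples fall out automatically

    def update(self, heading):
        self.history.append(heading)
        sx = sum(h[0] for h in self.history)
        sy = sum(h[1] for h in self.history)
        mag = math.hypot(sx, sy)
        return (sx / mag, sy / mag) if mag > 0 else (0.0, 0.0)
```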

Homework #3 Sheep Create "leader following" behavior one agent designated as leader all others try to follow not crowd each other not get in the leader's way meaning when in front of the leader, turn away

Leader following

Next week Steering Behaviors Lab SimpleSoccer Game combining state machines and steering behaviors