Fast Synthetic Vision, Memory, and Learning Models for Virtual Humans
J. Kuffner and J. Latombe
Presented by Teresa Miller
CS 326 – Motion Planning, Class 13
Goal
Have animated characters navigate autonomously in real-time virtual environments. The character should be able to:
- Explore an unknown environment
- Build a map in memory
- Plan a path to a goal
- Deal with objects that can move, appear, and disappear without warning
Synthetic Vision
Many previous researchers have given their characters omnidirectional vision, which is not realistic. Other (slow) techniques include:
- Ray casting
- Image-based motion-energy techniques
- Pre-rendered models of objects plus iterative pattern matching
Synthetic Vision
Instead:
- Render an unlit model of the scene (flat shading)
- Color each object uniquely
- Use the colors to obtain object information
The virtual environment allows us to ignore many typical computer-vision difficulties.
Synthetic Vision
- Divide the environment into small objects
- Give each a unique color/ID
- Render the scene with flat shading offscreen
- Scan the image and extract the visible-object data
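The rendering step itself depends on the graphics engine, but the color-ID scan is simple. Below is a minimal sketch of it, assuming the scene has already been rendered offscreen with flat shading and one unique 24-bit color per object; all function names and the framebuffer layout are illustrative, not from the paper.

```python
def id_to_color(obj_id):
    """Encode an integer object ID as a unique 24-bit RGB color."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def color_to_id(color):
    """Invert id_to_color: recover the object ID from a pixel color."""
    r, g, b = color
    return (r << 16) | (g << 8) | b

def scan_framebuffer(pixels):
    """Scan the rendered image and return the set of visible object IDs.
    `pixels` is an iterable of rows of RGB tuples; black (0, 0, 0)
    is treated as background."""
    visible = set()
    for row in pixels:
        for color in row:
            if color != (0, 0, 0):
                visible.add(color_to_id(color))
    return visible

# Tiny 3x4 "framebuffer": objects 7 and 300 are (partly) visible.
frame = [
    [(0, 0, 0), id_to_color(7), id_to_color(7),   (0, 0, 0)],
    [(0, 0, 0), id_to_color(7), id_to_color(300), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0),      id_to_color(300), (0, 0, 0)],
]
print(sorted(scan_framebuffer(frame)))  # [7, 300]
```

One pass over the image yields the set of objects the character can currently see, with no ray casting or pattern matching.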
Synthetic Vision – Examples
Memory
Previous work used octrees. This work instead stores one observation per object:
- objID_i = object ID
- P_i = properties of object i (these are flexible)
- T_i = 3D transformation (position) of object i
- v_i = velocity of object i
- t = observation time
Memory
- M = the set of all observations stored in memory
- V = the set of observations resulting from looking at the current scene
Memory
The memory is updated when an object is observed to have moved; unchanged observations are kept as-is.
Memory
Distinguishes "obscured" from "missing" objects. V_M is the result of re-running the vision module on the memory model: the set of objects expected to be seen. An object in V_M that is absent from the actual view V is missing; a remembered object not in V_M is merely obscured.
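The memory update and the obscured/missing test can be sketched as follows, using the observation fields listed above. The data layout and function names are assumptions for illustration, not the paper's code; in particular, V_M would really come from re-rendering the memory model through the vision module, whereas here it is passed in as a precomputed set of IDs.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    obj_id: int
    position: tuple        # stands in for the 3D transform T_i
    velocity: tuple        # v_i
    time: float            # t
    properties: dict = field(default_factory=dict)  # P_i

def update_memory(M, V):
    """M: dict obj_id -> Observation (the memory).
    V: observations from the current view.
    Overwrite a stored observation only if the object is new
    or has moved, and return the updated memory."""
    for obs in V:
        old = M.get(obs.obj_id)
        if old is None or old.position != obs.position:
            M[obs.obj_id] = obs
    return M

def detect_missing(M, V, V_M):
    """Split remembered-but-unseen objects into missing vs. obscured.
    V_M: IDs the vision module reports when re-run on the memory
    model, i.e. the objects we expect to see. An expected object
    absent from the real view is missing; a remembered object not
    expected to be visible is merely obscured."""
    seen = {obs.obj_id for obs in V}
    missing = V_M - seen
    obscured = set(M) - seen - V_M
    return missing, obscured
```

For example, with objects 1, 2, 3 in memory, only object 1 in the current view, and V_M = {1, 3}, object 3 is missing (expected but unseen) while object 2 is obscured.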
Learning and Forgetting
- Basic model: never forget anything
- Temporal model: forget locations after a while
- Logical model: forget the locations of objects whose properties mark them as movable
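The three policies differ only in which stored observations are pruned. A minimal sketch, assuming observations are records with a timestamp and a property dict (field names and the `max_age` threshold are illustrative assumptions):

```python
def forget(M, now, policy, max_age=30.0):
    """Return a pruned copy of memory M (obj_id -> observation record).
    'basic'    : never forget anything.
    'temporal' : drop observations older than max_age seconds.
    'logical'  : drop objects whose properties mark them as movable,
                 since their remembered location quickly goes stale."""
    if policy == "basic":
        return dict(M)
    if policy == "temporal":
        return {k: o for k, o in M.items() if now - o["time"] <= max_age}
    if policy == "logical":
        return {k: o for k, o in M.items()
                if not o["properties"].get("movable")}
    raise ValueError(f"unknown policy: {policy}")

memory = {
    1: {"time": 0.0,  "properties": {"movable": True}},   # an old, movable chair
    2: {"time": 25.0, "properties": {}},                  # a recent, fixed wall
}
# At time 40.0, both the temporal and logical models keep only object 2.
```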
Motion Planning
- Motion planning is performed with a "fast 2D path planner"
- A path-following controller executes the planned path
- Cyclic motion-capture data shows the characters walking
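The slides do not specify the paper's 2D planner, so as a stand-in here is a breadth-first search over a 2D occupancy grid, which captures the flavor of fast 2D planning over the learned map; the grid representation and function name are assumptions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """grid: list of rows, 0 = free, 1 = obstacle.
    Returns a shortest list of (row, col) cells from start to goal
    using 4-connected moves, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []          # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# A wall in the middle row forces a detour around the right side.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
```

The resulting cell path would then be handed to the path-following controller, which drives the walking animation along it.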
Experimental Results