Human Figure Animation
Interacting Modules
The ones identified:
–Speech, face, emotion
Plus others:
–Perception
–Physiological states
Point of Departure: H-Anim Spec
Why not view the body as an H-Anim MPEG-4 player? To some extent you can:
–Need to be able to describe body actions as H-Anim parameters if need be
But the body usually needs to do more:
–Execute behaviors
–Perform behaviors autonomously (e.g., breathe)
–Perform simple arbitration between behaviors (see the sketch below)
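As an illustration of the last bullet, here is a minimal Python sketch of priority-based arbitration over H-Anim-style joints. The Behavior/Body classes, the priority scheme, and the joint values are hypothetical, not taken from the H-Anim spec itself:

```python
class Behavior:
    """A named body behavior with a priority used for simple arbitration."""
    def __init__(self, name, joints, priority=0):
        self.name = name
        self.joints = joints          # {H-Anim joint name: rotation}
        self.priority = priority

class Body:
    def __init__(self):
        self.active = []              # currently running behaviors

    def set_joint(self, joint, rotation):
        """Low-level access: drive a single H-Anim joint directly."""
        print(f"{joint} <- {rotation}")

    def execute(self, behavior):
        """Simple arbitration: a behavior runs only if no higher-priority
        behavior already claims one of the same joints."""
        for b in self.active:
            if b.priority > behavior.priority and set(b.joints) & set(behavior.joints):
                return False          # blocked by a higher-priority behavior
        self.active.append(behavior)
        for joint, rot in behavior.joints.items():
            self.set_joint(joint, rot)
        return True

body = Body()
# Autonomous background behavior (e.g., breathing)
body.execute(Behavior("breathe", {"vt6": (0.02, 0, 0)}, priority=0))
# Commanded gesture on a different joint set: runs concurrently
body.execute(Behavior("point", {"r_shoulder": (0, 0, 1.2)}, priority=5))
# A lower-priority behavior on the same joint is blocked by arbitration
ok = body.execute(Behavior("wave", {"r_shoulder": (0, 0, 0.5)}, priority=1))
print("wave ran:", ok)   # False: "point" holds r_shoulder at higher priority
```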
Different Bodies with Different Capabilities
We envision bodies with different degrees of autonomous control:
–Bodies executing behaviors in fixed ways
–Bodies able to adjust their behavior according to the current emotions of the character
Goal: use the same interface, but behavior depends upon the capabilities of the particular virtual human body
–Possible mechanism: extensible markup languages (see the sketch below)
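A hedged sketch of how one markup request could drive bodies with different capabilities. The <behavior> tag, its attributes, and both body classes are invented for this illustration, not a real markup standard:

```python
import xml.etree.ElementTree as ET

# One request, sent through the same interface to every body
request = ET.fromstring('<behavior name="wave" intensity="adapt"/>')

class FixedBody:
    """Executes behaviors in a fixed, canned way."""
    def perform(self, req):
        print(f"play canned animation: {req.get('name')}")

class EmotiveBody:
    """Adjusts the same behavior according to the character's current emotion."""
    def __init__(self, emotion_source):
        self.emotion_source = emotion_source
    def perform(self, req):
        anger = self.emotion_source()            # e.g., 0.0 .. 1.0
        speed = 1.0 + anger if req.get("intensity") == "adapt" else 1.0
        print(f"play {req.get('name')} at speed {speed:.1f}")

# Both bodies accept the identical markup request; the result differs
FixedBody().perform(request)
EmotiveBody(lambda: 0.8).perform(request)
```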
Principle: Full Access to State and Controls
Body has access to high-level information (e.g., emotion, mood, planned future actions) if it needs it
No architectural restrictions on information access
Cognition can choose to control behavior at a high level or a low level (a sketch follows below)
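A minimal sketch of this principle, assuming a shared state object that the body can read directly; all class, method, and field names are illustrative:

```python
class AgentState:
    def __init__(self):
        self.emotion = {"anger": 0.0}
        self.mood = "neutral"
        self.planned_actions = []     # future actions, visible to the body too

class Body:
    def __init__(self, state):
        self.state = state            # full read access to high-level state
    def execute_goal(self, goal):
        # The body may consult emotion when realizing an abstract goal
        tension = self.state.emotion["anger"]
        print(f"{goal} with muscle tension {tension}")
    def set_joint(self, joint, rot):
        print(f"{joint} <- {rot}")

class Cognition:
    def __init__(self, state, body):
        self.state, self.body = state, body
    def act_high_level(self):
        # High-level control: hand the body an abstract goal
        self.body.execute_goal("demonstrate_step")
    def act_low_level(self):
        # Low-level control: drive individual joints directly
        self.body.set_joint("r_elbow", (0, 0, 0.9))

state = AgentState()
state.emotion["anger"] = 0.4          # no module is walled off from this
Cognition(state, Body(state)).act_high_level()
```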
Example: Gesture Generation
Body may need to know the planned points of emphasis in the speech stream, so that it can time gestures to coincide (see the timing sketch below)
Or defer responsibility to cognition if it is unable to do this
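A small sketch of the timing idea, assuming emphasis points arrive as times (in seconds) in the planned speech stream and that a gesture stroke needs a fixed preparation lead; both assumptions are illustrative:

```python
def schedule_gestures(emphasis_times, stroke_lead=0.3):
    """Return (start, stroke) pairs so each gesture stroke lands on an
    emphasized word. If the body cannot do this, responsibility would be
    deferred to cognition instead."""
    return [(max(0.0, t - stroke_lead), t) for t in emphasis_times]

# Emphasis at 1.2 s and 2.8 s into the utterance (illustrative numbers)
for start, stroke in schedule_gestures([1.2, 2.8]):
    print(f"begin gesture at {start:.1f}s, stroke at {stroke:.1f}s")
```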
Example: Angry Steve
Student pulls out dipstick while Steve is demonstrating another step
Steve needs to decide whether to show anger to the student
Cognition decides whether to reveal anger to the body (a toy sketch of this decision follows)
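A toy sketch of that decision, with an invented threshold standing in for whatever policy cognition actually applies; the key point is that the filter lives in cognition, not in the body:

```python
def cognition_filter(felt_anger, student_is_novice):
    """Return the anger level actually passed down to the body."""
    if student_is_novice and felt_anger < 0.7:
        return 0.0        # suppress: keep demonstrating calmly
    return felt_anger     # reveal: let the body display the emotion

body_anger = cognition_filter(felt_anger=0.5, student_is_novice=True)
print(f"anger shown by body: {body_anger}")
```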
Interaction with Other Modules
Body and face have parallel access to high-level information