Slide 1: Gesture Expressivity
Norman I. Badler
Center for Human Modeling and Simulation
University of Pennsylvania, Philadelphia, PA 19104-6389, USA
http://www.cis.upenn.edu/~badler
Slide 2: The "Realism Ceiling" for Human Models
[Figure: realism (visual and behavioral) plotted against time to create, annotated with special effects, inanimate objects, real-time agents, and life-like virtual humans.]
Slide 3: What Approaches Push the Curve Toward More Realism?
[Figure: the same realism curve, annotated with motion capture and parameterization (face, gait, gesture) as approaches that push real-time agents toward life-like virtual humans.]
Slide 4: Why is Realism Still Hard?
[Figure: the realism curve again. Motion capture is difficult to generalize; for parameterization (face, gait, gesture), what are the "right" parameters?]
Slide 5: Goal: Real-Time Interaction between Real and Virtual People
- Requires parameterized behaviors for flexible expression of the virtual person's internal state.
- Requires the virtual person to sense the state of live participants.
- What information is salient to both?
Slide 6: Outline
- Parameterized Action Representation (PAR)
- EMOTE, eye movements, FacEMOTE
- Character consistency
- Recognizing movement qualities
- The LiveActor environment
Slide 7: Parameterized Behaviors for Embodied Agents
- Use existing human behavior models as a guide.
- Want to drive the embodied agent from internal state: goals, emotions, culture, motivation...
- External behaviors help the observer perceive these internal [hidden] agent states.
Slide 8: Human Movement Categories
- Voluntary (tasks, reach, look-at)
- Dynamic (run, jump, fall)
- Involuntary (breathe, balance, blink)
- Subconscious
  – Low-level motor functions (fingers, legs, lips)
  – Communicative acts (facial expressions, limb gestures, body posture, eye movement)
Slide 9: Parameterized Action Representation (PAR)
- Derived from both natural language representation and animation requirements.
- Lets people instruct virtual agents.
- May be associated with a process-based agent cognitive model.
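To make the representation concrete, here is a minimal sketch of a PAR-like record as a Python dataclass. The field names (action, agent, objects, preconditions, manner, subactions) are illustrative assumptions, not the actual PAR schema, which is considerably richer.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PAR:
    """Illustrative PAR-like record; field names are hypothetical."""
    action: str                                        # verb, e.g. "disconnect"
    agent: str                                         # performer of the action
    objects: List[str] = field(default_factory=list)   # participating objects
    preconditions: List[str] = field(default_factory=list)
    manner: Optional[str] = None                       # adverbial modifier
    subactions: List["PAR"] = field(default_factory=list)

# An instruction like "Disconnect the 2 coolant lines" might become:
step = PAR(action="disconnect", agent="mechanic",
           objects=["coolant_line_1", "coolant_line_2"])
print(step.action, step.objects)
```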
Slide 10: Parameterized Action Representation (PAR)
[Diagram: language (NLP) and observed actions flow into PAR, which draws on the agent's internal state and an "Actionary" of stored actions; PAR drives the virtual agent to produce synthesized actions and supports language generation.]
Slide 11: PAR Examples
- Virtual reality checkpoint trainer.
- Maintenance instruction validation.
- The EMOTE motion qualities model.
Slide 12: Checkpoint Virtual Environment
Slide 13: Maintenance Instruction Validation
Example: F-22 power supply removal.
Slide 14: Instructions
1. Rotate the handle at the base of the unit.
2. Disconnect the 4 bottom electric connectors.
3. Disconnect the 5 top electric connectors.
4. Disconnect the 2 coolant lines.
5. Unbolt the 8 bolts retaining the power supply to the airframe, support it accordingly, and remove it.
Slide 15: Executing the Corresponding PARs
Eye view (note attention control). The instructions are translated to PAR, and PAR controls the actions, as the sketch below illustrates.
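A toy sketch of that translation step: instruction text becomes a PAR-like record, and the records are executed in order. The parser, field names, and print-based "execution" are stand-ins for the actual system.

```python
def parse_instruction(text: str) -> dict:
    """Naive parse (assumption): first word is the action, the rest
    of the sentence supplies the arguments."""
    words = text.rstrip(".").lower().split()
    return {"action": words[0], "agent": "mechanic", "arguments": words[1:]}

instructions = [
    "Rotate the handle at the base of the unit.",
    "Disconnect the 2 coolant lines.",
]

for text in instructions:
    par = parse_instruction(text)
    # A real system would dispatch the PAR to the animation engine here.
    print("executing:", par["action"], par["arguments"])
```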
Slide 16: EMOTE Motion Quality Model
- EMOTE: a real-time motion quality model.
- Based on the Effort and Shape components of Laban Movement Analysis.
- Defines qualities of movement with 8 parameters.
- Controls numerous lower-level parameters of an articulated figure.
Slide 17: Effort Motion Factors
Four factors range from an indulging extreme to a fighting extreme:
- Space: Indirect ---- Direct
- Weight: Light ---- Strong
- Time: Sustained ---- Sudden
- Flow: Free ---- Bound
Slide 18: Shape Motion Factors
Four factors relating body movement to space:
- Sagittal: Advancing ---- Retreating
- Vertical: Rising ---- Sinking
- Horizontal: Spreading ---- Enclosing
- Flow: Growing ---- Shrinking
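A minimal container for the eight EMOTE parameters, combining the two factor lists above. The [-1, +1] range is stated on a later slide; which pole maps to -1 is an assumption made here for illustration.

```python
from dataclasses import dataclass

def clamp(v: float) -> float:
    """EMOTE parameter values are kept in [-1, +1]."""
    return max(-1.0, min(1.0, v))

@dataclass
class EmoteParams:
    # Effort factors (assumed: -1 = indulging pole, +1 = fighting pole)
    space: float = 0.0       # indirect .. direct
    weight: float = 0.0      # light .. strong
    time: float = 0.0        # sustained .. sudden
    flow: float = 0.0        # free .. bound
    # Shape factors (assumed: -1 = first-listed pole, +1 = second)
    sagittal: float = 0.0    # advancing .. retreating
    vertical: float = 0.0    # rising .. sinking
    horizontal: float = 0.0  # spreading .. enclosing
    shape_flow: float = 0.0  # growing .. shrinking

# Strong, sudden movement with a rising quality:
p = EmoteParams(weight=clamp(0.9), time=clamp(0.8), vertical=clamp(-0.7))
```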
Slide 19: EMOTE: Inputs and Output
[Diagram: key poses, end-effector goals, motion capture, and procedures enter EMOTE together with the 4 Effort and 4 Shape parameters; inverse kinematics and interpolation produce frame-rate poses as output.]
Slide 20: Applying Effort to Arm Motions
- Effort parameters consist of values in [-1, +1] for Space, Weight, Time, and Flow.
- These are translated into low-level movement parameters (see the sketch below):
  – Trajectory definition
  – Timing control
  – Flourishes
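A toy illustration of the kind of translation this slide describes: Effort values in [-1, +1] mapped onto a few low-level trajectory, timing, and flourish parameters. The formulas are invented for illustration and are not EMOTE's actual mapping.

```python
def effort_to_low_level(space: float, weight: float,
                        time: float, flow: float) -> dict:
    """Hypothetical Effort-to-low-level translation (inputs in [-1, +1])."""
    return {
        # Sudden (time -> +1) compresses timing; Sustained stretches it.
        "duration_scale": 1.0 - 0.5 * time,
        # Direct (space -> +1) straightens the path toward the goal.
        "path_curvature": 0.5 * (1.0 - space),
        # Strong (weight -> +1) adds anticipation and overshoot.
        "overshoot": max(0.0, 0.3 * weight),
        # Bound (flow -> +1) damps wrist flourishes; Free frees them.
        "flourish_amplitude": max(0.0, 0.4 * (1.0 - flow)),
    }

# A direct, strong, sudden, somewhat free punch-like gesture:
print(effort_to_low_level(space=1.0, weight=0.8, time=0.9, flow=-0.5))
```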
Slide 21: Applying Shape to Arm Motions
- Shape parameters consist of values in [-1, +1] for the horizontal, vertical, and sagittal dimensions.
- For each dimension, we define an ellipse in the corresponding plane.
- The parameter value specifies the magnitude of movement of a keypoint along the ellipse.
- The Reach Space parameter moves the keypoint away from or toward the body's center of mass.
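A geometric sketch of the keypoint displacement just described: the parameter value moves the keypoint along an ellipse lying in the chosen plane. The ellipse construction and the angle mapping are assumptions for illustration.

```python
import math

def shape_offset(keypoint, value, axis_u, axis_v, a=0.10, b=0.05):
    """Displace a 3D keypoint along an ellipse with semi-axes a and b in
    the plane spanned by unit vectors axis_u and axis_v. `value` in
    [-1, +1] selects how far along the ellipse to travel (assumed to
    span a quarter arc in each direction)."""
    theta = value * math.pi / 2.0
    du = a * math.sin(theta)            # displacement along axis_u
    dv = b * (1.0 - math.cos(theta))    # displacement along axis_v
    return tuple(p + du * u + dv * v
                 for p, u, v in zip(keypoint, axis_u, axis_v))

# Horizontal shaping of a wrist keypoint in the x-z (transverse) plane:
wrist = (0.3, 1.1, 0.2)
print(shape_offset(wrist, value=0.8, axis_u=(1, 0, 0), axis_v=(0, 0, 1)))
```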
Slide 22: EMOTE Controls Other Performance Parameters
- Velocity
- Anticipation
- Overshoot
- Time exponent
- Number-of-frames multiplier
- Wrist bend multiplier
- Wrist extension multiplier
- Hand shape
- Squash
- Limb volume
- Path curvature
- Displacement multiplier
- Elbow twist magnitude
- Wrist twist magnitude
- Elbow twist frequency
- Wrist twist frequency
Slide 23: Motion Qualities are Significant
- Movements with EMOTE qualities give insight into the agent's cognitive state.
- When EMOTE qualities spread from the limbs to the body, movements appear more sincere.
[Diagram: the agent's actions plus EMOTE qualities are seen by an observer, who infers the agent's state.]
Slide 24: Start with Scripted Motions
Slide 25: The Actor with Rising Efforts (Strong emotions?)
Slide 26: The Actor with Neutral Efforts (A politician?)
Slide 27: The Actor with Less Rising Shape (Not quite as excited?)
Slide 28: Moving the Shapes Inward (Woody Allen?)
Slide 29: With Light and Sustained Efforts (More solemn and serious?)
Slide 30: Without Torso Movements (Used car salesman?)
Slide 31: Application of EMOTE to Manner Variants (Adverbs) in PAR: HIT
Hit the ball ... forcefully.
Hit the ball ... softly.
Slide 32: Define Links between Gesture Selection and Agent Model
Gesture performance cues agent state!
- Normal people show a variety of EMOTE parameters during gestures.
- Emotional states, and some pathologies, are indicated by a reduced spectrum of EMOTE parameters.
- That's why some synthetic characters appear bland and uninspired.
Slide 33: Eye Saccade Generation: to Distinguish Various Agent States
[Video: Face2Face. Saccade models for speaking, listening, and thinking states; the model is built in a pre-process and applied at run time.]
Slide 34: Eye Movements Modeled from Human Performance Data
[Video panels: data source; eyes fixed ahead; eyes moved by the statistical model; full MPEG-4 face.]
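A toy sampler in the spirit of a statistical eye-movement model driven by agent state. The per-state statistics and distributions below are invented for illustration; the actual model was fit to human performance data.

```python
import random

# Hypothetical per-state saccade statistics:
# mean inter-saccade interval (s) and mean magnitude (degrees).
SACCADE_STATS = {
    "speaking":  {"interval": 0.8, "magnitude": 10.0},
    "listening": {"interval": 1.5, "magnitude": 6.0},
    "thinking":  {"interval": 0.5, "magnitude": 14.0},
}

def next_saccade(state: str):
    """Sample time-to-next-saccade, magnitude, and direction for a state."""
    s = SACCADE_STATS[state]
    dt = random.expovariate(1.0 / s["interval"])      # Poisson-like timing
    mag = max(0.0, random.gauss(s["magnitude"], 0.3 * s["magnitude"]))
    direction = random.uniform(0.0, 360.0)            # degrees in gaze plane
    return dt, mag, direction

print(next_saccade("thinking"))
```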
Slide 35: Extending EMOTE to the Face: FacEMOTE
[Diagram: an MPEG-4 FAP stream plus the 4 Effort and 4 Shape parameters enter FacEMOTE, which applies action-unit modifications and outputs a modified MPEG-4 FAP stream.]
Slide 36: FAPs Used
1. open_jaw
2. lower_t_midlip
3. raise_b_midlip
4. stretch_cornerlip
5. raise_l_cornerlip
6. close_t_l_eyelid
7. raise_l_i_eyebrow
8. squeeze_l_eyebrow
- 47 FAPs used out of 66 (not the visemes and expressions, nor the tongue, nose, ears, or pupils).
- 8 primary parameters; the rest are interpolated from these.
Slide 37: Approach
- Preprocess the FAP stream for the target face model to find maxima and minima (optional, but useful to avoid caricatures); a sketch follows.
- Maintain the overall structure of muscle actions (contracting or relaxing), but change the path/time of the muscle trajectory.
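A minimal sketch of the preprocessing step, finding local maxima and minima in one FAP channel over time. A simple three-point test is assumed; the actual preprocessing may be more robust.

```python
def local_extrema(fap_values):
    """Return indices of local maxima and minima in a FAP time series."""
    maxima, minima = [], []
    for i in range(1, len(fap_values) - 1):
        prev, cur, nxt = fap_values[i - 1], fap_values[i], fap_values[i + 1]
        if cur > prev and cur > nxt:
            maxima.append(i)
        elif cur < prev and cur < nxt:
            minima.append(i)
    return maxima, minima

# One hypothetical 'open_jaw' channel sampled over a few frames:
open_jaw = [0.0, 0.2, 0.5, 0.4, 0.1, 0.3, 0.6, 0.2]
print(local_extrema(open_jaw))   # ([2, 6], [4])
```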
Slide 38: Main Steps
- Threshold local maxima and minima peaks as required by the EMOTE parameters.
- Modulate (multiply) the primary FAPs by EMOTE parameters to adjust their strength.
- Modulate the secondary FAPs with weighted blending functions reflecting the relative influence of each primary parameter on particular secondary parameters.
(A sketch of the two modulation steps follows.)
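A sketch of the two modulation steps under stated assumptions: primary FAPs are scaled by an EMOTE-derived gain, and secondary FAPs are weighted blends of the modulated primaries. The gain and blend weights here are illustrative, not the published ones.

```python
def modulate_faps(primary, secondary_weights, emote_gain):
    """Scale primary FAPs by emote_gain, then derive each secondary FAP
    as a weighted blend of the modulated primaries (weights assumed)."""
    modulated = {name: value * emote_gain for name, value in primary.items()}
    secondary = {
        name: sum(w * modulated[p] for p, w in weights.items())
        for name, weights in secondary_weights.items()
    }
    return modulated, secondary

primary = {"open_jaw": 0.6, "raise_b_midlip": 0.3}
secondary_weights = {
    "stretch_cornerlip_r": {"open_jaw": 0.4, "raise_b_midlip": 0.2},
}
print(modulate_faps(primary, secondary_weights, emote_gain=1.3))
```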
Slide 39: Toward Character Consistency (very much in progress...)
Apply a consistent set of parameters to:
- Arms (torso): EMOTE
- Face: FacEMOTE
- [Gait?]
- [Speech?!]
The following examples use gesture motion-capture data and MPEG-4 face data.
Slide 40: Body without Face Motion (body by Salim Zayat)
Slide 41: Face without Body Motion ('Greta' face courtesy of Catherine Pelachaud)
Slide 42: Face and Body with EMOTE (body: indirect, widening; face: indirect)
Slide 43: Mismatched Face and Body Qualities (body: indirect, light, free; face: direct, heavy, bound)
Slide 44: Next Steps
- Add torso shape changes and rotations.
- Fix gesture timing and synchronization (use BEAT?).
- Adapt EMOTE phrasing to speech prosody.
- Find a suitable speech generator.
- Investigate the role of body realism.
- Develop an evaluation methodology.
Slide 45: Recognizing EMOTE Qualities
- Professional LMA notators can do it.
- Gives a "reading" of performer state.
- The manner in which a gesture is performed may be more salient to understanding action (intent) than careful classification of the gesture itself.
- Labeling EMOTE qualities in real time would inform virtual agents of human state.
Slide 46: Recognizing EMOTE Parameters in a Live Performance
- Neural networks were trained to recognize occurrences of significant EMOTE parameters from motion capture (both in video and from 3D electromagnetic sensors); a sketch of one plausible setup follows.
- Ground truth came from 2 LMA notators.
- The results are encouraging: EMOTE features may be detectable in everyday actions, and even from a single camera view.
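One plausible per-factor setup, sketched with the standard library only: a small classifier per Effort factor mapping a motion-feature vector to its three classes. The architecture, features, and weights are assumptions; the slide does not specify the networks beyond the description above.

```python
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class TinyClassifier:
    """One linear layer plus softmax; a stand-in for the trained networks."""
    def __init__(self, n_features: int, n_classes: int = 3):
        self.w = [[random.gauss(0.0, 0.1) for _ in range(n_features)]
                  for _ in range(n_classes)]

    def predict(self, features):
        scores = [sum(wi * xi for wi, xi in zip(row, features))
                  for row in self.w]
        return softmax(scores)

# Hypothetical features: mean speed, peak acceleration, path straightness.
time_net = TinyClassifier(n_features=3)
print(time_net.predict([0.4, 1.2, 0.7]))  # P(sustained, neutral, sudden)
```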
Slide 47: Experimental Results: Confusion Matrices (3D Motion Capture)
(Rows are actual labels, columns are predictions.)

Time network    Sustained  Neutral  Sudden
Sustained              44        4       0
Neutral                 0        0       0
Sudden                  0        0      50

Weight network      Light  Neutral  Strong
Light                  12        0       2
Neutral                 0        0       0
Strong                  2        0      18

Flow network         Free  Neutral   Bound
Free                   27        1       0
Neutral                 0        0       0
Bound                   2        0      14

Space network    Indirect  Neutral  Direct
Indirect               13        1       0
Neutral                 0        0       0
Direct                  0        1      11
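A small worked example reading these tables: per-class and overall accuracy computed from a confusion matrix whose rows are actual labels and columns are predictions (the orientation assumed above).

```python
def accuracy_from_confusion(matrix, labels):
    """Per-class accuracy (None for empty rows) and overall accuracy."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(labels)))
    per_class = {
        labels[i]: (matrix[i][i] / sum(row) if sum(row) else None)
        for i, row in enumerate(matrix)
    }
    return per_class, correct / total

# Time network, 3D motion capture (table above):
time_matrix = [[44, 4, 0], [0, 0, 0], [0, 0, 50]]
print(accuracy_from_confusion(time_matrix, ["sustained", "neutral", "sudden"]))
# sustained ~0.92, sudden 1.00; overall ~0.96
```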
Slide 48: Experimental Results: Confusion Matrices (2-Camera Video Input)
(Rows are actual labels, columns are predictions.)

Time network    Sustained  Neutral  Sudden
Sustained               8        1       1
Neutral                 0        0       0
Sudden                  0        1       6

Weight network      Light  Neutral  Strong
Light                  12        1       1
Neutral                 0        0       0
Strong                  0        2      10

Flow network         Free  Neutral   Bound
Free                    7        2       0
Neutral                 0        0       0
Bound                   0        2      11

Space network    Indirect  Neutral  Direct
Indirect               13        2       0
Neutral                 0        0       0
Direct                  0        1      10
Slide 49: EMOTE Parameters Determined from a Single Camera View
First- and second-order features are mostly preserved. Initial results are encouraging.
Slide 50: LiveActor: Immersive Environment for Real + Virtual Player Interaction
- Based on the Ascension ReActor IR real-time motion capture system.
- EON Reality stereo display wall.
- Emphasis on non-verbal communication channels.
Slide 51: The LiveActor Environment
Slide 52: Live Motion Capture
Slide 53: Example Based on DI-Guy from BDI plus Motion Capture and Scripts
Slide 54: Conclusion: A Plan for an Embodied Agent Architecture
- LiveActor environment for interactions between live and virtual agents.
- Embed qualities of real people into agents.
- Sense actions, and especially motion qualities, of the live player.
- Link agent animation across body parts, multiple people, the environment, and the situation.
Slide 55: Acknowledgments
http://www.cis.upenn.edu/~badler
Martha Palmer, Aravind Joshi, Dimitris Metaxas; Jan Allbeck, Koji Ashida, Matt Beitler, Aaron Bloomfield, MeeRan Byun, Diane Chi, Monica Costa, Rama Bindiganavale, Neil Chatterjee, Charles Erignac, Karin Kipper, Seung-Joo Lee, Sooha Lee, Matt Leiker, Ying Liu, William Schuler, Harold Sun, Salim Zayat, and Liwei Zhao.
Special thanks to Catherine Pelachaud and Eric Petajan.
Sponsors: NSF, ONR, NASA, USAF, ARO, EDS, Alias|Wavefront, and Ascension Technologies.