Perceiving Animacy and Arousal in Transformed Displays of Human Interaction

1 Phil McAleer, 2 Barbara Mazzarino, 2 Gualtiero Volpe, 2 Antonio Camurri, 1 Kirsty Smith, 1 Helena Paterson, 1 Frank E. Pollick
1 Department of Psychology, University of Glasgow, Glasgow, Scotland; 2 InfoMus Lab, D.I.S.T., University of Genova, Genova, Italy

Introduction

Heider and Simmel (1944) showed that people, on viewing a simple animation involving geometric shapes (a disc, a large triangle and a small triangle), would attribute emotions and intentions to the shapes based on their movements. Subsequent experiments have varied simple mathematical relationships between the shapes, and have shown that the attribution of animacy is driven largely by changes in the speed and direction of the shapes rather than by their characteristic features. We introduce a new method for creating animacy displays directly from actual human movements. We examined the perception of animacy using transformed displays of human interactions, intending that this would allow new insights into how visual cues lead to the spontaneous use of animate terms and the attribution of social meaning.

Stimulus Production

Each experiment involved creating four displays of the same scene with decreasing amounts of visual information available. The original footage was captured using a digital video camera, and the four experimental conditions were then created using the EyesWeb open platform for multimedia production and motion analysis (McAleer et al., 2004):

1. Real Video – the original footage.
2. Body Silhouette – obtained by removing colour information and applying a background subtraction technique to the input video.
3. Pulsing Block(s) – the movement of each person is represented by a rectangle whose size is related to the Quantity of Motion (QoM), as measured by algorithms included in the EyesWeb Expressive Gesture Processing Library (Camurri, Mazzarino & Volpe, 2004). QoM is computed as the change in the area of the person's silhouette from one frame to the next, summed over the last few frames (4 frames in this experiment). QoM can be taken as a measure of the global amount of detected motion, and can be thought of as a first rough approximation of physical momentum (a sketch follows this list).
4. Block(s) – the centre of mass of each person's silhouette is tracked, and the block representing each person has constant dimensions.
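The displays themselves were produced in EyesWeb. Purely as an illustration of the processing steps described above, the following Python sketch (using OpenCV and NumPy rather than the EyesWeb library) shows one plausible way to derive the three transformed conditions from a frame sequence. The function names, the background-subtraction threshold, and the block-size parameters are assumptions made for this sketch; only the 4-frame QoM window comes from the text above.

```python
import cv2
import numpy as np
from collections import deque

QOM_WINDOW = 4  # frames summed for Quantity of Motion, as stated on the poster

def silhouette(frame, background, thresh=30):
    """Condition 2: silhouette via background subtraction on a greyscale frame.
    `background` is a greyscale image of the empty scene; `thresh` is an
    illustrative value, not taken from the poster."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(grey, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def centre_of_mass(mask):
    """Condition 4: centroid of the silhouette via image moments."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no foreground detected in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

class QuantityOfMotion:
    """Condition 3: QoM as the frame-to-frame change in silhouette area,
    summed over the last QOM_WINDOW frames."""
    def __init__(self):
        self.prev_area = None
        self.changes = deque(maxlen=QOM_WINDOW)

    def update(self, mask):
        area = int(np.count_nonzero(mask))
        if self.prev_area is not None:
            self.changes.append(abs(area - self.prev_area))
        self.prev_area = area
        return sum(self.changes)

def render_block(shape, centre, qom, base_size=40, gain=0.01):
    """Draw one person as a filled rectangle centred on the tracked centroid.
    For the Pulsing Block condition the side length grows with QoM; for the
    Block condition pass qom=0 so the size stays constant. `base_size` and
    `gain` are illustrative parameters."""
    canvas = np.zeros(shape, dtype=np.uint8)
    if centre is None:
        return canvas
    half = int(base_size / 2 + gain * qom)
    cx, cy = int(centre[0]), int(centre[1])
    cv2.rectangle(canvas, (cx - half, cy - half), (cx + half, cy + half), 255, -1)
    return canvas
```

In a two-person scenario each actor would be segmented and tracked separately, with one rectangle rendered per person; per-person segmentation (for example, splitting the mask into connected components) is omitted here for brevity.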
Experiment 1a & 1b – Dance Scenarios

1a. Stimuli & Design: The footage depicts two men performing a dance routine. There were 4 experimental conditions: Real, Body Silhouette, Pulsing Block, Block. 36 subjects took part in a between-subjects design: 9 per condition.
1a. Tasks & Results: Subjects were asked for a free response, and an average on-line rating of emotional engagement was collected using a slider.

1b. Stimuli & Design: The footage depicts one man performing a dance routine. There were the same 4 experimental conditions: Real, Body Silhouette, Pulsing Block, Block. 36 subjects took part in a between-subjects design: 9 per condition.
1b. Tasks & Results: Subjects were asked for a free response and gave a self-propulsion rating, and an average on-line rating of arousal was collected using a slider.

Experiment 2 – Social Scenarios

Stimuli & Design: 6 new scenarios were filmed: 4 involving interactions between two people, such as chasing, following, and circling; 2 involving a single person walking or jogging. 9 scenarios were used in total: the 6 new scenarios, the scenarios from Experiments 1a & 1b, and a representation of the Heider and Simmel display (Nevarez & Scholl, 2000). There were 4 experimental conditions: Real, Body Silhouette, Pulsing Block, Block. 32 subjects took part in a between-subjects design: 8 per condition.
Tasks & Results: Subjects were asked for a free response. The scenario depicted (right) resulted in the largest occurrence of animate terms, after the Heider and Simmel display.

Conclusion

Using this new technique for generating stimuli from real video footage, we were able to create an abstract display, involving geometric shapes, that observers described in animate terms.

References

Camurri, A., Mazzarino, B., & Volpe, G. (2004). Analysis of expressive gesture: The EyesWeb Expressive Gesture Processing Library. In A. Camurri & G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction, LNAI 2915. Springer Verlag.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57(2), 243-259.
McAleer, P., Mazzarino, B., Volpe, G., Camurri, A., Smith, K., Paterson, H., & Pollick, F. E. (2004). Perceiving animacy and arousal in transformed displays of human interaction. Proceedings of ISHF_MCM_2004, in press.
Nevarez, H. G., & Scholl, B. J. (2000).