Creating Animacy Displays from Scenes of Human Action
Phil McAleer¹, Barbara Mazzarino², Gualtiero Volpe², Antonio Camurri², Kirsty Smith¹, Helena Paterson¹, Frank E. Pollick¹
¹Department of Psychology, University of Glasgow, Glasgow, Scotland; ²InfoMus Lab, DIST, University of Genova, Genova, Italy

Introduction
Heider and Simmel (1944) showed that people, on viewing a simple animation involving geometric shapes (a disc, a large triangle and a small triangle), would attribute emotions and intentions to the shapes based on their movements. Subsequent experiments have varied simple mathematical relationships in the motion of the shapes, and have shown that the attribution of animacy is driven largely by changes in the speed and direction of the shapes rather than by characteristic features. We introduce a new method for creating animacy displays directly from actual human movements. We examined the perception of animacy using transformed displays of human actions, with the intent that this would offer new insight into how visual cues lead to the spontaneous use of animate terms and the attribution of social meaning.

Methods
9 scenarios were used in total: 5 involved interactions between 2 people, such as dancing, chasing, following and circling; 3 involved a single person walking, jogging or dancing; and 1 was a representation of the Heider and Simmel display (Nevarez & Scholl, 2000). There were 4 visual display conditions (Real, Body Silhouette, Pulsing Block, Block) and 2 tasks (free response and self-propulsion rating). 32 subjects took part in a between-subjects design, 8 per condition.

Stimulus Production
The experiment involved creating four displays of each scene, with decreasing amounts of visual information available. The original footage was captured using a digital video camera; the four experimental conditions were then created using the EyesWeb open platform for multimedia production and motion analysis (a code sketch of how conditions 2-4 could be derived follows the list):
1. Real Video - the original footage.
2. Body Silhouette - obtained by removing colour information and applying a background subtraction technique to the input video.
3. Pulsing Block(s) - the movement of a person is represented by a rectangle whose size is related to the Quantity of Motion (QoM), as measured by algorithms included in the EyesWeb Expressive Gesture Processing Library. QoM is computed as the change in area of the person's silhouette from one frame to the next, summed over the last few frames (4 frames in this experiment). QoM can be taken as a measure of the global amount of detected motion, and can be thought of as a first rough approximation of physical momentum.
4. Block(s) - the centre of mass of each person's silhouette is tracked, and the dimensions of the block for each person are held constant.
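Read as an image-processing pipeline, conditions 2-4 are straightforward to sketch. The Python/NumPy fragment below is a minimal illustration of the ideas described above, not the EyesWeb implementation: the static background frame, the threshold of 30, the synthetic demo and all function names are assumptions for illustration; only the 4-frame QoM window and the area-difference definition of QoM come from the text.

import numpy as np

def silhouette(frame_rgb, background_rgb, threshold=30):
    """Body Silhouette: drop colour information, subtract a static
    background image, and keep pixels differing by more than `threshold`
    (an arbitrary value chosen here for illustration)."""
    grey = frame_rgb.astype(float).mean(axis=2)   # remove colour information
    bg = background_rgb.astype(float).mean(axis=2)
    return np.abs(grey - bg) > threshold          # boolean foreground mask

def quantity_of_motion(recent_silhouettes, window=4):
    """Pulsing Block: QoM as the frame-to-frame change in silhouette
    area, summed over the last `window` frames (4 in the experiment)."""
    areas = [int(s.sum()) for s in recent_silhouettes[-(window + 1):]]
    return sum(abs(b - a) for a, b in zip(areas, areas[1:]))

def centre_of_mass(sil):
    """Block: centroid of the silhouette pixels, giving the position
    at which a constant-size rectangle would be drawn."""
    ys, xs = np.nonzero(sil)
    return float(xs.mean()), float(ys.mean())

if __name__ == "__main__":
    # Synthetic demo: a dark patch that grows against a plain background.
    background = np.full((120, 160, 3), 200, dtype=np.uint8)
    frames = []
    for t in range(6):
        f = background.copy()
        f[40:80, 20:60 + 10 * t] = 30
        frames.append(f)
    history = [silhouette(f, background) for f in frames]
    print("QoM:", quantity_of_motion(history))      # 4 frames x 400 px = 1600
    print("centroid:", centre_of_mass(history[-1]))

In a pulsing-block renderer, quantity_of_motion would scale the rectangle drawn for each person, while centre_of_mass would fix the rectangle's position in the constant-size Block condition.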
Figure: first, middle and last frames of each display condition.

Results
Self-propulsion rating and free response results for the Real Video and Body Silhouette conditions are not shown - all subjects rated these displays as being animate and as completely self-propelled. The scenario depicted in the figure produced the largest occurrence of animate terms, after the Heider and Simmel display.

Top-view versus Side-view (McAleer et al., 2004)
In contrast to the previous experiment, animacy displays have typically been shown from an overhead perspective. We therefore investigated whether the perception of animacy is affected by the viewpoint of the display.

Methods & Results
Subjects were asked to give a rating of self-propulsion for a series of 16 displays. Each display was shown from the top view and from the side view, only the Block display condition was used, and each subject saw each display 3 times.
Figure: example frames of the (1) Real and (4) Block conditions from the side view and the top view.

Conclusions
This method for producing animacy displays from real video footage is viable: using the new technique for stimulus generation, we were able to create abstract displays, involving geometric shapes, that elicited animate terms in subjects' descriptions. Viewpoint appears to influence the rating of self-propulsion, with displays from the top view rated higher (more often as self-propelled) than those from the side view.

References
Camurri, A., Mazzarino, B., & Volpe, G. (2004). Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library. In A. Camurri & G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction, LNAI 2915. Springer Verlag.
Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. American Journal of Psychology, 57(2).
McAleer, P., Mazzarino, B., Volpe, G., Camurri, A., Smith, K., Paterson, H., & Pollick, F.E. (in press). Perceiving Animacy and Arousal in Transformed Displays of Human Interaction. Proceedings of ISHF_MCM_2004.
Nevarez, H.G., & Scholl, B.J. (2000). Judgements of Animacy.