Animated Speech Therapist for Individuals with Parkinson Disease
Supported by the Coleman Institute for Cognitive Disabilities
J. Yan, L. Ramig and R. Cole
Center for Spoken Language Research, University of Colorado at Boulder, Campus Box 594, Boulder, Colorado, USA

CURRENT SITUATION
The prevalence of disordered communication is particularly high among the one and one half million individuals diagnosed with idiopathic Parkinson disease (IPD). There is a growing need for an accessible, inexpensive, effective treatment for disordered communication in these individuals. At least 89% of these individuals have disordered speech and voice (Logemann et al., 1978), but only 3-4% receive speech treatment (Hartelius et al., 1994). Our goal is to provide accessible, effective treatment to all who need it.

RESEARCH OBJECTIVE
- Develop a fully automated agent that interacts with a patient much like a human therapist.
- Create an animated therapist able to increase the loudness and improve the quality and intelligibility of the patient's speech.
- Develop a computer-based LSVT (Lee Silverman Voice Treatment) program, and demonstrate the feasibility of making LSVT treatment widely accessible through computer-based delivery.
- Create tools that run on off-the-shelf computer platforms and are therefore widely available to individuals with IPD for home use or through computers in clinics and public institutions.
- Create a computer-based LSVT program built around engaging, intelligent animated agents: lifelike 3D characters designed to interact with individuals with IPD much like effective LSVT clinicians.

SYSTEM ARCHITECTURE

Patient's behavior analysis component
Audio:
1. Measure the volume of phonation.
2. Measure the duration of phonation.
Video:
1. Capture the patient's lip motion and show it on the screen, so the patient can see their mouth shape and how wide it opens during therapy.
2. Capture the patient's head orientation through head tracking, so that the agent always faces the patient.
A minimal sketch of the audio measurements appears below.
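The following sketch shows one way the two audio measurements could be computed, assuming 16 kHz mono float samples in [-1, 1]. The frame sizes and the -30 dBFS phonation threshold are illustrative assumptions, not values from the poster:

```python
import numpy as np

FRAME_MS = 25         # analysis frame length (assumed)
HOP_MS = 10           # frame step (assumed)
PHONATION_DB = -30.0  # assumed silence threshold, relative to full scale

def frame_signal(x, sr, frame_ms=FRAME_MS, hop_ms=HOP_MS):
    """Split a mono signal into overlapping analysis frames."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    return np.stack([x[i * hop : i * hop + frame] for i in range(n)])

def loudness_db(frames):
    """Per-frame RMS level in dBFS (0 dB = full-scale signal)."""
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    return 20.0 * np.log10(rms + 1e-12)

def phonation_duration(x, sr, threshold_db=PHONATION_DB):
    """Total phonation time: seconds of frames whose level exceeds the threshold."""
    db = loudness_db(frame_signal(x, sr))
    return np.count_nonzero(db > threshold_db) * HOP_MS / 1000.0
```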

Synthesis component
1. Display the volume and pitch of the phonation in real time as the patient practices.
2. Plot these parameters as functions of time to track how volume and pitch improve over the course of treatment.
3. Build a photo-realistic 3D virtual therapist model.
4. Render this character model in a 3D virtual environment.
5. Create the basic animation library for the treatment session: typical facial expressions, eye contact, head movements and hand gesture sequences.
6. Create contextually appropriate gestures and speech for the therapist-patient exchanges during therapy, according to the patient's phonation and pitch quality.
7. Record a database of real human therapy voice recordings.
A pitch tracker suitable for the real-time display in items 1-2 is sketched below.
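One way to obtain the pitch value for that display is a simple autocorrelation tracker. This sketch assumes the same 16 kHz mono frames as above (at least 25 ms long), and the 60-400 Hz search range is an assumption chosen for adult voices:

```python
import numpy as np

def pitch_hz(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate F0 of one voiced frame by peak-picking the autocorrelation."""
    frame = frame - frame.mean()
    # Keep only the non-negative lags of the full autocorrelation.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lo, hi = int(sr / fmax), min(int(sr / fmin), len(ac) - 1)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag
```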

Lori’s avatar

3D ANIMATED THERAPIST TECHNIQUES
HEAD ANIMATION
Visible speech: Visible speech is produced by morphing between viseme targets.
Facial expression: To generate a large number of facial expressions, we enable independent control of separate facial components. Facial expression is controlled by 38 parameters using sliders in a dialog box.
Head gesture: Three types of head movement have been designed: head turning, head nodding and circular head movement.
Eye gesture: Eye blink control and eyeball movement control, including circular eyeball movement.
Smoothing facial expressions: Three types of smoothing algorithm were designed to meet this requirement: (1) an "ease-in/ease-out" algorithm, used to vary animation speed over time so that the motion looks more realistic; (2) Kochanek-Bartels cubic splines, whose three parameters (tension, continuity and bias) are used to produce smooth motion; (3) B-splines. A sketch of the Kochanek-Bartels interpolation follows.
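A minimal sketch of the Kochanek-Bartels interpolation named above. Keys here are numpy vectors of facial parameters; the tangent convention follows the common formulation (conventions for the sign of c vary), and boundary handling at the first and last key is left to the caller:

```python
import numpy as np

def tcb_tangents(p_prev, p, p_next, t=0.0, c=0.0, b=0.0):
    """Kochanek-Bartels incoming/outgoing tangents at key p.

    t = tension, c = continuity, b = bias; t = c = b = 0 reduces to Catmull-Rom.
    """
    d_in = (0.5 * (1 - t) * (1 + b) * (1 - c) * (p - p_prev)
            + 0.5 * (1 - t) * (1 - b) * (1 + c) * (p_next - p))
    d_out = (0.5 * (1 - t) * (1 + b) * (1 + c) * (p - p_prev)
             + 0.5 * (1 - t) * (1 - b) * (1 - c) * (p_next - p))
    return d_in, d_out

def hermite(p0, p1, m0, m1, s):
    """Cubic Hermite blend between keys p0, p1 with tangents m0, m1; s in [0, 1]."""
    s2, s3 = s * s, s * s * s
    return ((2 * s3 - 3 * s2 + 1) * p0 + (s3 - 2 * s2 + s) * m0
            + (-2 * s3 + 3 * s2) * p1 + (s3 - s2) * m1)
```

To animate the segment from key i to key i+1, use the outgoing tangent at i and the incoming tangent at i+1 inside `hermite`.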

BODY ANIMATION A parameter-driven skeleton/bone model is used to generate lifelike gestures. The skeleton bones are treated as rigid objects. Each bone is driven by its joint's rotation parameters about the three rotation axes, and the movement of the skeleton is controlled by the rotation parameters predefined for each joint. A forward-kinematics sketch of this scheme appears below.
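A minimal forward-kinematics sketch of such a rigid bone chain. The XYZ rotation order and the chain representation are assumptions for illustration, not details from the poster:

```python
import numpy as np

def euler_xyz(rx, ry, rz):
    """Rotation matrix from per-joint Euler angles (radians), X then Y then Z."""
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def forward_kinematics(chain):
    """World position of each joint in a bone chain.

    chain: list of (bone_vector_in_parent_frame, (rx, ry, rz)) tuples.
    """
    R = np.eye(3)
    pos = np.zeros(3)
    out = []
    for bone, angles in chain:
        R = R @ euler_xyz(*angles)                 # accumulate the joint rotation
        pos = pos + R @ np.asarray(bone, float)    # offset along the rotated bone
        out.append(pos.copy())
    return out
```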

Multi-level gesture description module The module has a three-level structure. The first level, the hand shape transcriber, is used to build the hand shape data. The second level, the sign transcriber, relies on the hand shape database and allows users to specify the location and motion of the two (left and right) arms. The third level, the animation transcriber, generates realistic animation sequences from the target frames produced by the sign transcriber. One possible data layout for these three levels is sketched below.
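A hypothetical data layout for the three transcriber levels; the class and field names are illustrative assumptions, not identifiers from the system:

```python
from dataclasses import dataclass, field

@dataclass
class HandShape:
    """Level 1: joint angles for one hand (15 joints, 22 DOF total)."""
    name: str
    joint_angles: dict  # joint id -> angle(s) in radians

@dataclass
class Sign:
    """Level 2: hand shapes plus location/motion for both arms."""
    left: HandShape
    right: HandShape
    left_arm_path: list    # sequence of wrist positions/orientations
    right_arm_path: list

@dataclass
class AnimationSequence:
    """Level 3: timed key frames ready for spline interpolation."""
    key_frames: list = field(default_factory=list)  # (time_s, full-body pose)
```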

Hand shape transcriber
Low-level parameter controller: The hand skeleton comprises 22 degrees of freedom (DOF) in 15 joints per hand; the controller provides direct, precise manipulation of each finger joint.
High-level trajectory controller: A set of commands describes a specific hand gesture using six trajectories: "spread", "bend", "hook", "separate", "yaw" and "pitch".
Hand shape library: A total of 20 basic hand shapes were selected for the primary hand shape library.
A sketch of how such trajectory commands could drive the low-level parameters follows.
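A hypothetical mapping from the six trajectory commands to the 22-DOF angle vector. Which joints each command touches, and the angle ranges, are assumptions made for illustration only:

```python
import numpy as np

FINGER_FLEX_JOINTS = list(range(0, 15))    # flexion DOF indices (illustrative)
FINGER_SPREAD_JOINTS = list(range(15, 19))  # abduction DOF indices (illustrative)

def apply_command(dof, command, amount):
    """Return a new 22-DOF angle vector with one trajectory command applied.

    amount in [0, 1] scales the command from 'not at all' to 'fully'.
    """
    dof = np.asarray(dof, dtype=float).copy()
    if command == "bend":        # curl all finger flexion joints evenly
        dof[FINGER_FLEX_JOINTS] += amount * np.pi / 2
    elif command == "hook":      # curl distal joints more than proximal ones
        weights = np.linspace(0.2, 1.0, len(FINGER_FLEX_JOINTS))
        dof[FINGER_FLEX_JOINTS] += amount * weights * np.pi / 2
    elif command in ("spread", "separate"):
        sign = 1.0 if command == "spread" else -1.0
        dof[FINGER_SPREAD_JOINTS] += sign * amount * np.pi / 8
    elif command in ("yaw", "pitch"):  # wrist DOF, assumed to be the last two
        dof[-2 if command == "yaw" else -1] += amount * np.pi / 4
    return dof
```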

Body posture transcriber
Built on top of the hand shape transcriber; allows users to specify a body posture in terms of hand shape, location and orientation for both hands and arms. A body posture library is maintained.
Hand gesture examples

Animation transcriber Enables users to define the animation speed and route as a specific sequence of key frames, where each key frame is described by one particular body posture. A cubic-spline-based interpolation algorithm generates the animation sequence according to the hierarchical structure of the body. The animation library includes sequences such as bowing, "thumbs up", clapping and others. The sketch below shows how timed key frames can be sampled into an animation.
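A minimal sketch of sampling such a key-frame sequence at a fixed frame rate. Catmull-Rom tangents stand in for the system's cubic splines, and roughly uniform key spacing is assumed for the tangent scale:

```python
import numpy as np

def sample_animation(times, poses, fps=30):
    """Sample key-framed poses at a fixed frame rate with cubic interpolation.

    times: increasing key times in seconds; poses: matching pose vectors.
    The spacing of `times` encodes the animation speed the user chose.
    """
    times = np.asarray(times, dtype=float)
    poses = np.asarray(poses, dtype=float)
    # Tangents: central differences, clamped at the two end keys.
    m = np.empty_like(poses)
    m[1:-1] = 0.5 * (poses[2:] - poses[:-2])
    m[0] = poses[1] - poses[0]
    m[-1] = poses[-1] - poses[-2]
    frames = []
    for t in np.arange(times[0], times[-1], 1.0 / fps):
        i = min(np.searchsorted(times, t, side="right") - 1, len(times) - 2)
        s = (t - times[i]) / (times[i + 1] - times[i])
        s2, s3 = s * s, s ** 3
        frames.append((2*s3 - 3*s2 + 1) * poses[i] + (s3 - 2*s2 + s) * m[i]
                      + (-2*s3 + 3*s2) * poses[i + 1] + (s3 - s2) * m[i + 1])
    return np.stack(frames)
```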

CURRENT RESULTS REVIEW
- Designed a photo-realistic 3D character model of the real therapist, Professor Ramig.
- Videotaped eight tapes of Dr. Ramig conducting LSVT therapy with four different patients, collecting rich primary video material of the treatment sessions.
- Received invaluable feedback from patients about the benefits of, and barriers to, using animated characters as LSVT therapists.
- Implemented a working prototype of the seated LSVT therapist agent.
- Built the basic database of the speech, facial expressions and upper-body gestures of the animated LSVT therapist.
Seated therapist