1 Layered Avatar Behavior Model and Script Language for the Object Interaction Jae-Kyung Kim 2007-08-10


1 Layered Avatar Behavior Model and Script Language for the Object Interaction Jae-Kyung Kim

2 Contents
 • Part I : Ph.D. Dissertation
  – Introduction
  – Related Works
  – Research Goals
  – Layered Avatar Behavior Model and Script Language
    Avatar-Object Interaction Model
    Layered Script Language Approach
  – Empirical Evaluations
  – Conclusion
 • Part II : Current Research Interests
  – Avatar Augmented Annotation Interface for e-Learning

3 Introduction
 • Recent studies on human character (avatar) animation mostly focus on realistic motion representation
  – e.g., facial animation, synchronization between lip movement and speech text, hair animation, and so on
 • Human character motion is often created with motion capture
  – Very useful for creating human motion and low-level animation
 • Most research on avatar motion control is mainly concerned with realistic motion
  – Our main goal is to provide a simple user interface by controlling avatar motion at a high conceptual level

4 Related Works (1/2)
 • Low-level Animation
  – Classic and basic control technique
  – Rotate and move every part of the avatar body
  – Depending on the animator's skill, any desired motion can be created in a detailed manner
  – However, it is very hard for unskilled users to control and create avatar motion this way
  – 3DS Max, Maya, motion capture

5 Related Works (2/2)
 • Avatar Script Languages
  – Represent avatar body and facial motion in a language format (e.g., XML)
  – Control synchronization and intensity of behaviors through parameters
  – Most scripts use pre-defined general behaviors
  – Difficult to create a script because of complicated parameters and synchronization
  – Some scripts are tied to the format of a specific engine, which can cause low compatibility
  – AML, CML, XSTEP, TVML, etc.

6 Research Goals & Motivation
 • Create animation scenario scripts with less effort and time in a specific application domain
  – Pre-defined general behaviors cannot support all application domains
  – A more abstract representation is required for authoring scripts
 • Explicitly separate the user interface from the animation engine for extensibility and compatibility
  – Existing scripts target the user interface level and the animation engine level at the same time
  – A standard format and structure are needed

7 Layered Avatar Behavior Model and Script
 • Overall structure (figure): the architecture is organized in three layers
  – Domain Interface Layer: a context menu interface presents executable behaviors to the user, and the recorded selections form a Task-Level Behavior Script
  – Motion Sequence Layer: the 1st translator (task-to-high) uses the Avatar-Object Interaction Model to translate the task-level script into a High-level Motion Script; the 2nd translator (high-to-primitive) uses geometric information to produce a Primitive Motion Script
  – Application Layer: the primitive motion script drives avatar and object low-level animation data in the application

8 Avatar-Object Interaction Model (1/2)
 • Basic Object Structure
  – The proposed model holds multiple objects below the ObjectList element
  – An object is composed of three elements
    Context: object state, access privilege, control points, and domain type
    ExecutableBehaviors: all executable behaviors for the user interface menu
    MotionList: the motion sequence needed to complete a behavior
  – (Figure: basic XML DTD structure, with ObjectList containing Object+ and each Object containing Context, ExecutableBehaviors, and MotionList)
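A minimal sketch of how the basic object structure above could be written as an XML DTD. The element names (ObjectList, Object, Context, ExecutableBehaviors, MotionList) come from the slide; the content models and attributes are illustrative assumptions, not the dissertation's actual DTD.

<!-- Hypothetical DTD sketch; attributes and content models are assumptions -->
<!ELEMENT ObjectList (Object+)>
<!ELEMENT Object (Context, ExecutableBehaviors, MotionList)>
<!ATTLIST Object id ID #REQUIRED name CDATA #REQUIRED>

<!-- Context: object state, access privilege, control points, domain type -->
<!ELEMENT Context (State*, AccessGroup*, ControlPoint*, DomainType?)>

<!-- ExecutableBehaviors: behaviors exposed in the user interface menu -->
<!ELEMENT ExecutableBehaviors (Behavior+)>
<!ELEMENT Behavior EMPTY>
<!ATTLIST Behavior name CDATA #REQUIRED>

<!-- MotionList: motion sequence that completes a behavior -->
<!ELEMENT MotionList (Motion+)>
<!ELEMENT Motion EMPTY>
<!ATTLIST Motion name CDATA #REQUIRED behavior CDATA #IMPLIED>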

9 Avatar-Object Interaction Model (2/2)
 • (Figure) Interaction flow: the Context element (Domain, AccessGroup, ControlPoint, State) filters the ExecutableBehaviors into the selectable behaviors shown to the user; when the user selects a behavior, task planning expands it into a motion sequence from the MotionList, generating the motions.

10 Context Element (1/5)
 • State Element
  – Describes internal object states as pairs of variable and value attributes
  – A variable has an affective-motion method and a current-state method
    The affective motion defines a post-state value and the corresponding motion that changes the state
    The current-state function is defined as an XSLT script that processes the State element and outputs a boolean result for the current state value
  – The behavior menu and the motion sequence are decided according to changes in the state variable values

11 Context Element (2/5)
 • (Figure) Door object interaction and state changes: the variable DoorState has the affective motion 'close' with post-state 'closed', and the state function IsClosed.
 • IF
  – (1) the event motion equals the affective motion, and
  – (2) the state function evaluates to TRUE,
 • THEN
  – (1) build the list of variables that satisfy the conditions,
  – (2) set each variable's value to its post-state, and
  – (3) output the boolean result of the state function.
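For illustration, the door example above might be encoded as a State element roughly as follows; the element and attribute names (State, Variable, AffectiveMotion, StateFunction) are assumptions based on the slide's terminology, not the exact markup of the dissertation.

<!-- Hypothetical State encoding of the door object (names are assumptions) -->
<State>
  <Variable name="DoorState" value="open">
    <!-- the 'close' motion changes DoorState to 'closed' -->
    <AffectiveMotion motion="close" postState="closed"/>
    <!-- current-state predicate, evaluated by an XSLT script -->
    <StateFunction name="IsClosed" test="DoorState = 'closed'"/>
  </Variable>
</State>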

12 Context Element (3/5)
 • ControlPoint Element
  – Spatial reference points around an object where the avatar stands and interacts with it
  – Depending on the behaviors and object states, an object may have various points
  – A point consists of position, direction, and contact elements
  – The elements use coarse references to represent spatial locations at a high level
    Human readable
    Independent of the physical geometric object models
  – Used for behavior sub-tasking and motion sequencing

13 Context Element (4/5)
 • Position
  – Five basic and four composed positions relative to each object
  – The standard axis is -z for the front, on the zy-plane
 • Direction
  – Four basic and four composed object-relative directions that determine the avatar's orientation
  – The rotation axis is y, with zero degrees for the forward direction
 • Contact
  – Designates the specific part of the object that the avatar touches when it makes contact with the object
  – The corners of the object's 3D bounding box and the center points of its surfaces

 Element   | Reference values
 Position  | Front, Behind, Left, Right, Center; Left&Front, Right&Front, Left&Behind, Right&Behind
 Direction | Backward, Forward, Left, Right; Left&Forward, Right&Forward, Left&Backward, Right&Backward
 Contact   | Front, Behind, Left, Right, Top, Bottom, Center; Left&Front&Center, Left&Front&Top, Left&Front&Bottom, Right&Front&Center, ...

 (Figure: object-relative coordinate axes x, y, z illustrating Position, Direction, and Contact references)
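As an illustration of the reference values above, a control point for a door object might look like the sketch below; the element names and the behavior/state attributes are assumptions.

<!-- Hypothetical control point: stand in front of the door, face forward,
     touch the front-center of the object (names are assumptions) -->
<ControlPoint behavior="open" state="closed">
  <Position value="Front"/>
  <Direction value="Forward"/>
  <Contact value="Front&amp;Center"/>
</ControlPoint>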

14 Context Element (5/5)
 • AccessGroup Element
  – An object's behaviors can be accessed by several user types
  – Each user type has a different purpose and usage
  – Each group has different access to the behavior interface
 • DomainType Element
  – Not only the internal state of an object, but also the external domain state can affect behavior interaction
  – The same object can behave as a semantically different object
    e.g., a car object in a traffic-simulation domain vs. an assembly-training domain
  – The DomainType element defines the name of the domain, used to check the current domain type
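A small illustrative sketch of how access groups and the domain type might be declared inside the Context element; all names and values are assumptions.

<!-- Hypothetical access and domain declarations (names are assumptions) -->
<AccessGroup name="instructor" behaviors="drive disassemble"/>
<AccessGroup name="student" behaviors="drive"/>
<DomainType name="assembly-training"/>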

15 Layered Script Language
 • Overall structure (figure): the same three-layer pipeline as in slide 7: Task-Level Behavior Script → 1st Translator (task-to-high, using the Avatar-Object Interaction Model) → High-level Motion Script → 2nd Translator (high-to-primitive, using geometric information) → Primitive Motion Script → avatar and object low-level animation data.

16 Task-level Behavior Script (1/2)
 • Task-level Behavior Script
  – Represents object and avatar behaviors using an XML DTD
  – Records and saves user actions in temporal sequence
  – Extensible representation of the behavior set
    The objects in a domain make up the usable behavior set
    (cf. existing scripts, which have a pre-defined behavior DTD)
  – Independent of rendering environments
    Does not describe the appearance of the avatar model, the positional information of virtual objects, and so on

17 Task-level Behavior Script (2/2)
 • Definition of the Task-level Avatar-Object Behavior Script
  – Template-based avatar-object behavior representation
    The domain behavior set is flexibly defined by object instances
    Task-level Behavior = (Object, Executable-Behavior, Narration)
  – (Figure) Example script: a sequence of (Object, Behavior, Narration) entries such as (Door, Enter), (Text, Highlight), (Computer, Prev. page), (Screen, Point), and (Computer, Next page, "Hello I'm ..."); objects come from the domain object pool, behaviors from user selection, and narration from user text typing. (The XML markup of the example was lost in the transcript; a hedged reconstruction follows below.)
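A hedged reconstruction of such a task-level script fragment, using the object/behavior/narration entries from the slide; the element and attribute names are assumptions, since the original markup is not recoverable from the transcript.

<!-- Hypothetical task-level behavior script (element names are assumptions) -->
<TaskLevelScript>
  <Behavior object="Door" action="Enter"/>
  <Behavior object="Text" action="Highlight"/>
  <Behavior object="Computer" action="Prev.page"/>
  <Behavior object="Screen" action="Point"/>
  <Behavior object="Computer" action="Next page">
    <Narration>Hello I'm ...</Narration>
  </Behavior>
</TaskLevelScript>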

18 High-level Motion Script (1/2)
 • High-level Motion Script
  – Bridges the task-level script and the primitive motion script
  – Independent of both the interface domain and the application domain
  – Parameterized motion support
    Synchronization, speed, target, repeat, intensity, decay, etc.
    Abstract values rather than physical ones

19 High-level Motion Script (2/2)
 • Structure of the high-level script DTD (figure not reproduced in the transcript; an illustrative sketch follows below)
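A minimal sketch of what high-level motion entries could look like, using the abstract parameters listed on the previous slide (synchronization, speed, target, repeat, intensity, decay); the element and attribute names are assumptions, not the actual DTD shown in the original figure.

<!-- Hypothetical high-level motion entries with abstract parameter values (names are assumptions) -->
<HighLevelMotion>
  <Motion name="walk" target="Door" speed="normal" repeat="1" sync="sequential"/>
  <Motion name="reach" target="Door" intensity="normal" decay="default" sync="sequential"/>
  <Speech sync="parallel">Hello I'm ...</Speech>
</HighLevelMotion>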

20 Primitive Motion Script
 • Primitive motion
  – Supported by the rendering engine or a motion library
  – Represented with physical parameter values
    e.g., geometric values such as the coordinates of the avatar and objects, the radian value of the avatar's direction, etc.
  – The parameters are transmitted to the animation engine to play the animation scenario on screen
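By contrast, a primitive-level fragment would carry physical values resolved against the scene geometry; the sketch below is purely illustrative (element names and numeric values are assumptions).

<!-- Hypothetical primitive motion with physical parameter values (names and numbers are illustrative) -->
<PrimitiveMotion>
  <Locomotion name="walk" from="2.0 0.0 5.0" to="0.0 0.0 1.2" direction="1.57" duration="3.5"/>
  <Gesture name="reach" joint="rightArm" target="0.0 1.1 0.9" duration="1.0"/>
</PrimitiveMotion>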

21 Script Translators (1/4)
 • Task-level Behavior Script Translator
  – Translates the task-level behavior script into the high-level motion script
  – The output consists of the motion sequences needed to perform the task-level behaviors
    Motion planning by the task planner generates the sequences ex)
  – Uses logical application-domain knowledge
  – According to the formal translation model, the script of each domain is converted into a high-level motion script

22 Script Translators (2/4)
 • Formal Task Translation Model
  – Identification of target
    Find the location of the target object (L_o)
  – Locomotive motion (M_l)
    Identify the present avatar location (L_a)
    Compute the spatial distance between L_a and L_o
    Generate M_l if L_a ≠ L_o
  – Manipulative motion (M_m)
    Generate M_m for L_o
  – Verbal information (V_i)
    Generate verbal speech for the avatar behavior, if available
  – Speed and intensity parameters
    Parameterize M_m and M_l for speed, intensity, and duration

23 Script Translators (3/4)
 • Procedure modules (figure): the Task-level Script Translator is composed of a Spatial Information Generator, a Temporal Information Generator, a Locomotion Generator, and an Intensity Controller.
 • Example (figure): a task-level behavior script containing the narration "Let's do it next time!" addressed to students is translated into a generated high-level motion script with default/normal parameter values and the same narration. (The XML markup of both scripts was lost in the transcript; a hedged reconstruction follows below.)
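A hedged reconstruction of the slide's input/output example, reusing the strings that survive in the transcript ("Let's do it next time!", "students", "normal", "default"); all element and attribute names are assumptions, and the object/behavior of the original example is not recoverable.

<!-- Hypothetical task-level input (names are assumptions; the original object/behavior is not recoverable) -->
<TaskLevelScript audience="students">
  <Behavior object="..." action="...">
    <Narration>Let's do it next time!</Narration>
  </Behavior>
</TaskLevelScript>

<!-- Hypothetical generated high-level output (names are assumptions) -->
<HighLevelMotion>
  <Motion name="wave" speed="normal" intensity="default" sync="parallel"/>
  <Speech speed="normal" sync="parallel">Let's do it next time!</Speech>
</HighLevelMotion>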

24 Script Translators (4/4)
 • High-level Motion Script Translator
  – Major module: the object geometric information analyzer
    Calculates the location, size, and direction of virtual objects
    Tracks the current status of the avatar's position and posture
  – Translates the abstract parameters of the high-level script into physical values
 • (Figure) The High-level Motion Translator takes the high-level motion script and, using the Object Geometry Analyzer to obtain object geometry from the virtual space, produces the primitive motion script.

25 System Implementation (1/2)
 • Creating a scenario script

26 System Implementation (2/2)
 • Running a scenario script in different environments
  – Different physical properties of virtual objects
  – Applying the same task-level script to other applications: an OpenGL application and a 2D web application (MS Agent)

27 Empirical Evaluation (1/3)
 • 23 users were asked to perform 20 tasks pre-defined in video material and text handouts
 • Measures
  – Time to finish each test
  – Ratio of achieved tasks
 • Groups and systems
  – G1: novice users (12)
  – G2: expert users (11)
  – S1: proposed method
  – S2: proposed method without the context element
  – S3: Alice system (a comparatively lower-level behavior representation)
 • Analysis
  – One-way ANOVA (analysis of variance)

28 Empirical Evaluation (2/3)
 • Results (figure): charts of the results for Group 1 (novice) and Group 2 (expert)

29 Empirical Evaluation (3/3)
 • Lessons learned
  – Using abstract task-level behaviors, both novice and expert users achieved high accuracy and fast scripting performance
    Expert users produced better results in the Alice system than novice users did
  – Expert users showed a tradeoff in the Alice system
    Those who spent more time reached almost the same task-achievement ratio as with the proposed system
    However, the proposed system saved considerable time
  – Both groups answered that the context behavior menu helped in creating the scenario
    Novice users prefer abstract behaviors, while some expert users tend to prefer primitive motions for detailed manipulation

30 Conclusion
 • Conclusion
  – The proposed method consists of three layered script languages and an object interaction model
  – Avatar-object task-level behaviors abstract away low-level concepts to make scripting easier
  – By separating the interface from the application, a script can be reused in various environments
  – Novice users can create scenarios with less effort and prefer the proposed method

31 Current Research Interests: Avatar Augmented Annotation Interface for e-Learning
 • Introduction
  – E-Learning, web-based teaching (WBT), and distance learning let students learn in their own space and at their own time
  – Interactions between students and teachers increase the educational effect
    Without interaction, students become bored with online coursework
  – Interaction techniques
    Video and animation clips
    Animated pedagogical agents
    Annotations (digital ink)

32 Current Interests: Avatar Augmented Annotation Interface for e-Learning
 • Animated Pedagogical Agent
  – The main advantages of classroom instruction over distance education: the persona effect and gestures performed in live instruction
  – Many systems have been studied
    CTA, MASH, Wizlow, PPP, Steve, etc.
  – Creating these avatar-augmented presentations
    Usually laborious and time consuming
    An easy and intuitive authoring interface technique is needed

33 Current Interests: Avatar Augmented Annotation Interface for e-Learning
 • Annotation (digital inking)
  – An important feature of a digital lecture or presentation system
  – Annotation, or digital inking, can capture knowledge exchanged between student and teacher and makes material easy to share and reuse
  – AnnotatED, OATS, XLibris, Intelligent Pen, etc.

34 Current Research Interests: Avatar Augmented Annotation Interface for e-Learning
 • Avatar augmented annotation
  – Enhances the learning effect by integrating these two functionalities: avatar and annotation
    To author animated lecture contents
    To mark up the presentation using freehand annotations
  – Reduces the cost of controlling avatar animation
  – Supports author- and user-created narrative scenarios
 • (Figure) An integrating framework connects the freehand annotation interface used by the author/user to the avatar animation scenario delivered to the end user.

35 Current Research Interests: Avatar Augmented Annotation Interface for e-Learning
 • Overall structure of the proposed system (figure): the author's interface (free-hand gesture, menu) annotates the original web document; the annotation model, context model, and animation model (with its motion library) generate a scenario script, which is played back as animated content in the user's interface.

36
 • Main components
  – Annotation model
    Recognizes free-hand annotation types
    Menu-based annotation types
  – Context model
    Structural information of web documents
    Annotated position expressed in XLink/XPath
    HTML/XML tag information of the annotated area
  – Animation model
    Animation list
    Animation decision rules
    Assigns an animation to each annotation, according to the annotation types defined in the annotation and context models

37 Current Research Interests: Avatar Augmented Annotation Interface for e-Learning
 • Summary
  – Still at the early proposal stage
    Working on defining the models and implementing the interfaces
    Will continue as individual research during a post-doc period
  – Considering collaboration with other related systems
    Applying the pen-based interface to OATS or AnnotatED
    An agent interface for semantic navigation or recommendation systems