SceneMaker: Automatic Visualisation of Screenplays Eva Hanser Prof. Paul Mc Kevitt Dr. Tom Lunney Dr. Joan Condell School of Computing & Intelligent Systems.


SceneMaker: Automatic Visualisation of Screenplays Eva Hanser Prof. Paul Mc Kevitt Dr. Tom Lunney Dr. Joan Condell School of Computing & Intelligent Systems Faculty of Computing & Engineering University of Ulster, Magee {p.mckevitt, tf.lunney,

PRESENTATION OUTLINE
- Aims & Objectives
- Related Projects
- SceneMaker Design and Implementation
- Relation to Other Work
- Conclusion and Future Work

AIMS : AIMS & OBJECTIVES
- Automatically generate affective virtual scenes from screenplays/play scripts
- Realistic visualisation of emotional aspects
- Enhance believability of virtual actors and scene presentation
- Multimodal representation with 3D animation, speech, audio and cinematography
[Diagram: Input (screenplay) -> SceneMaker System -> Output (animation)]

OBJECTIVES : AIMS & OBJECTIVES
- Emotions and semantic information from context
- Cognitive reasoning rules combined with commonsense and affective knowledge bases
- Automatic genre recognition from text
- Editing 3D content on mobile devices
- Design, implementation and evaluation of SceneMaker

SEMANTIC TEXT PROCESSING : RELATED PROJECTS
- Text layout analysis
- Semantic information on location, timing, props, actors, actions and manners, dialogue
- Parsing the formal structure of screenplays (Choujaa and Dulay 2008); a minimal parsing sketch follows after the extract below

Screenplay extract from ‘Good Will Hunting (1997)’:

INT. M.I.T. HALLWAY -- NIGHT
Lambeau and Tom come around a corner. His P.O.V. reveals a figure in silhouette blazing through the proof on the chalkboard. There is a mop and a bucket beside him. As Lambeau draws closer, reveal that the figure is Will, in his janitor's uniform. There is a look of intense concentration in his eyes.
LAMBEAU
Excuse me!
WILL
Oh, I'm sorry.
LAMBEAU
What're you doing?
WILL
(walking away)
I'm sorry.
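The screenplay layout itself carries much of this semantic information. The sketch below is a rough Python illustration of layout-based line classification, not the parser used by SceneMaker or by Choujaa and Dulay; the regular expressions and the classify_line/parse_script helpers are assumptions for illustration only.

```python
import re

# Hypothetical sketch: label screenplay lines using simple layout conventions
# (scene headings, character cues, parentheticals, dialogue, action).
SCENE_HEADING = re.compile(r"^(INT\.|EXT\.)\s+.+")
CHARACTER_CUE = re.compile(r"^[A-Z][A-Z .'-]+$")
PARENTHETICAL = re.compile(r"^\(.+\)$")

def classify_line(line, previous_type):
    """Label one stripped screenplay line using simplified layout rules."""
    if SCENE_HEADING.match(line):
        return "scene_heading"          # location and time of day
    if PARENTHETICAL.match(line):
        return "parenthetical"          # manner of delivery, e.g. (walking away)
    if CHARACTER_CUE.match(line) and len(line.split()) <= 3:
        return "character"              # speaker name in capitals
    if previous_type in ("character", "parenthetical", "dialogue"):
        return "dialogue"
    return "action"                     # default: stage direction / description

def parse_script(text):
    labelled, prev = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            prev = None
            continue
        prev = classify_line(line, prev)
        labelled.append((prev, line))
    return labelled

sample = """INT. M.I.T. HALLWAY -- NIGHT
Lambeau and Tom come around a corner.
LAMBEAU
Excuse me!
WILL
(walking away)
I'm sorry."""

for label, line in parse_script(sample):
    print(f"{label:15s} {line}")
```

Run on the extract above, such rules tag the scene heading, separate the characters' dialogue from the action description, and label "(walking away)" as a parenthetical carrying the manner of delivery.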

COMPUTATION OF EMOTION AND PERSONALITY
Emotion models:
- Basic emotions (Ekman and Rosenberg 1997)
- Pleasure-Arousal-Dominance (Mehrabian 1997)
- OCC – appraisal rules (Ortony et al. 1988)
Personality models:
- OCEAN (McCrae and John 1992)
- Belief-Desire-Intention (Bratman 1987)
Emotion recognition from text:
- Keyword spotting, lexical affinity, statistical models, fuzzy logic rules, machine learning, common knowledge, cognitive models (a keyword-spotting sketch follows below)
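As a concrete illustration of the first technique in that list, the sketch below shows keyword spotting in Python. The tiny lexicon is a hypothetical stand-in for a resource such as WordNet-Affect, and the spot_emotions helper is invented; this is not SceneMaker's emotion recogniser.

```python
from collections import Counter

# Illustrative keyword-spotting sketch; the lexicon is a placeholder,
# not an actual affective resource.
EMOTION_LEXICON = {
    "furious": "anger", "rage": "anger",
    "terrified": "fear", "afraid": "fear",
    "delighted": "joy", "smile": "joy",
    "weeping": "sadness", "grief": "sadness",
}

def spot_emotions(text):
    """Count lexicon hits per basic emotion in a piece of script text."""
    counts = Counter()
    for token in text.lower().split():
        word = token.strip(".,!?\"'()")
        if word in EMOTION_LEXICON:
            counts[EMOTION_LEXICON[word]] += 1
    return counts

print(spot_emotions("Will is furious, but Lambeau manages a smile."))
# Counter({'anger': 1, 'joy': 1})
```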

VISUAL AND EMOTIONAL SCRIPTING : RELATED PROJECTS
- Scripting notation for the visual appearance of animated characters
- Various XML-based annotation languages (an illustrative sketch follows below):
  - EMMA (EMMA 2003)
  - BEAT (Cassell et al. 2001)
  - MPML & MPML3D (Dohrn and Brügmann 2007)
  - AffectML (Gebhard 2005)
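To make the idea of such annotation languages concrete, the sketch below generates a small XML fragment tying an utterance to an emotion and intensity. The element and attribute names (scene, actor, affect, speech) are invented for illustration and do not follow the actual EMMA, BEAT, MPML or AffectML schemas.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the kind of XML behaviour annotation such languages
# produce; tag names are invented, not any real schema.
def annotate_utterance(character, text, emotion, intensity):
    scene = ET.Element("scene")
    actor = ET.SubElement(scene, "actor", name=character)
    affect = ET.SubElement(actor, "affect",
                           emotion=emotion, intensity=str(intensity))
    speech = ET.SubElement(affect, "speech")
    speech.text = text
    return ET.tostring(scene, encoding="unicode")

print(annotate_utterance("WILL", "I'm sorry.", "embarrassment", 0.6))
```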

MODELLING AFFECTIVE BEHAVIOUR : RELATED PROJECTS
- Automatic physical transformation and synchronisation of 3D models
- Manner influences the intensity, scale, force, fluency and timing of an action (see the sketch below)
- Multimodal annotated affective video or motion-captured data (Gunes and Piccardi 2006)
- Example systems: AESOPWORLD (Okada et al. 1999), Personality & Emotion Engine (Su et al. 2007), Greta (Pelachaud 2005)
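The sketch below illustrates, under assumed parameter names and scaling rules, how an arousal or manner value might modulate the timing and amplitude of a base action. It is a toy example, not the behaviour model of any of the systems listed above.

```python
from dataclasses import dataclass

# Hedged sketch: manner/arousal modulates the timing and amplitude of an
# action; the scaling rule is an assumption for illustration.
@dataclass
class Action:
    name: str
    duration: float   # seconds
    amplitude: float  # relative joint displacement, 0..1

def apply_manner(action: Action, arousal: float) -> Action:
    """Higher arousal -> faster, larger movement; lower -> slower, smaller."""
    speed_factor = 1.0 + 0.5 * arousal      # arousal assumed in [-1, 1]
    return Action(
        name=action.name,
        duration=action.duration / speed_factor,
        amplitude=min(1.0, action.amplitude * speed_factor),
    )

walk = Action("walk_away", duration=2.0, amplitude=0.5)
print(apply_manner(walk, arousal=0.8))   # agitated: quicker, wider strides
print(apply_manner(walk, arousal=-0.6))  # subdued: slower, smaller strides
```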

VISUALISING 3D SCENES : RELATED PROJECTS
- WordsEye – scene composition (Coyne and Sproat 2001)
- ScriptViz – screenplay visualisation (Liu and Leung 2006)
- CONFUCIUS – action, speech & scene animation (Ma 2006)
- CAMEO – cinematic and genre visualisation (Shim and Kang 2008)
[Screenshots: WordsEye, CONFUCIUS, ScriptViz, CAMEO]

AUDIO GENERATION : RELATED PROJECTS
- Emotional speech synthesis (Schröder 2001): prosody rules
- Music recommendation systems (a mood-to-music sketch follows below):
  - Categorisation of rhythm, chords, tempo, melody, loudness and tonality
  - Sad or happy music and genre membership (Cano et al. 2005)
  - Associations between emotions and music (Kuo et al. 2005)
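A minimal sketch of rule-based music selection along these lines appears below; the mood profiles, feature names and track titles are invented placeholders rather than the cited systems' actual categorisations.

```python
# Simple rule-based mapping from scene mood to musical features,
# with an invented toy music library.
MOOD_TO_MUSIC = {
    "sad":   {"tempo": "slow", "mode": "minor", "loudness": "soft"},
    "happy": {"tempo": "fast", "mode": "major", "loudness": "moderate"},
    "tense": {"tempo": "fast", "mode": "minor", "loudness": "loud"},
}

def select_music(mood, library):
    """Return tracks whose features match the target mood profile."""
    target = MOOD_TO_MUSIC.get(mood, {})
    return [t["title"] for t in library
            if all(t.get(k) == v for k, v in target.items())]

library = [
    {"title": "Adagio", "tempo": "slow", "mode": "minor", "loudness": "soft"},
    {"title": "Chase",  "tempo": "fast", "mode": "minor", "loudness": "loud"},
]
print(select_music("sad", library))    # ['Adagio']
print(select_music("tense", library))  # ['Chase']
```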

KEY OBJECTIVES : DESIGN AND IMPLEMENTATION
- Context consideration through natural language processing, commonsense knowledge and reasoning methods
- Fine-grained emotion distinction with OCC (a simplified appraisal sketch follows below)
- Extract genre and moods from screenplays
- Influence on visualisation
- Enhance naturalism and believability
- Text-to-animation software prototype, SceneMaker
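For intuition about what fine-grained distinction with OCC means, the sketch below encodes two branches of an OCC-style appraisal (well-being and fortunes-of-others) as a single Python function. It is a drastic simplification of the full OCC model and not SceneMaker's actual reasoning rules.

```python
# Drastically simplified OCC-style appraisal: the emotion label depends on
# whether an event is desirable, whose goals it affects, and the appraiser's
# attitude towards the other character.
def appraise(desirable, affects_self, likes_other=True):
    """Map a simple appraisal to an OCC emotion label."""
    if affects_self:
        return "joy" if desirable else "distress"
    if desirable:
        return "happy-for" if likes_other else "resentment"
    return "pity" if likes_other else "gloating"

print(appraise(desirable=False, affects_self=True))                     # distress
print(appraise(desirable=True, affects_self=False, likes_other=False))  # resentment
```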

ARCHITECTURE OF SCENEMAKER : DESIGN AND IMPLEMENTATION
[Figure: Architecture of SceneMaker]

SOFTWARE AND TOOLS : DESIGN AND IMPLEMENTATION
- Language processing module of CONFUCIUS: part-of-speech tagger, Functional Dependency Grammars, WordNet, LCS database, temporal relations, visual semantic ontology
- Extensions:
  - Context and emotion reasoning: ConceptNet, Open Mind Common Sense (OMCS), Opinmind, WordNet-Affect
  - Text pre-processing: layout analysis tool with layout rules
  - Genre-recognition tool with keyword co-occurrence, term frequency and dialogue/scene length (a feature-extraction sketch follows below)
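A rough sketch of the genre-recognition features named above (keyword frequency plus a structural cue such as average dialogue length) is given below; the keyword lists, scoring scheme and the genre_scores helper are assumptions for illustration, not the actual tool.

```python
import re
from collections import Counter

# Illustrative genre cues: keyword frequency per genre plus average
# dialogue-turn length. Keyword sets and scoring are placeholders.
GENRE_KEYWORDS = {
    "action":  {"gun", "chase", "explosion", "fight"},
    "romance": {"love", "kiss", "heart", "wedding"},
}

def genre_scores(script_text, dialogue_lines):
    tokens = re.findall(r"[a-z']+", script_text.lower())
    freq = Counter(tokens)
    scores = {g: sum(freq[w] for w in words)
              for g, words in GENRE_KEYWORDS.items()}
    # longer average dialogue turns weakly suggest dialogue-driven genres
    avg_dialogue_len = (sum(len(d.split()) for d in dialogue_lines)
                        / max(1, len(dialogue_lines)))
    return scores, avg_dialogue_len

scores, avg_len = genre_scores(
    "They kiss before the car chase and the explosion.",
    ["I love you.", "Run!"],
)
print(scores, avg_len)   # {'action': 2, 'romance': 1} 2.0
```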

SOFTWARE AND TOOLS (CONT.) : DESIGN AND IMPLEMENTATION
- Visualisation module of CONFUCIUS: H-Anim 3D models, VRML, media allocation, animation scheduling
  - Extensions: cinematic settings (EML), affective animation models
- Media module of CONFUCIUS: speech synthesis with FreeTTS
  - Extension: automatic music selection
- User interface for mobile and desktop: VRML player, script-writing tool, 3D editing

EVALUATION OF SCENEMAKER : DESIGN AND IMPLEMENTATION
Evaluating 4 aspects of SceneMaker:

Aspect                                                      | Evaluation
Correctness of screenplay analysis & visual interpretation  | Hand-animating scenes
Effectiveness of output scenes                               | Existing feature film scenes
Suitability for genre type                                   | Scenes of unknown scripts categorised by viewers
Functionality of interface                                   | Testing with drama students and directors

RELATION TO OTHER WORK

POTENTIAL CONTRIBUTIONS : RELATION TO OTHER WORK
- Context reasoning to influence emotions requires commonsense knowledge bases and context memory
- Text layout analysis to access semantic information
- Visualisation from sentence, scene or full script
- Automatic genre specification
- Automatic development of personality, social status, narrative role and emotions

CONCLUSION AND FUTURE WORK
- Automatic visualisation of the affective expression of screenplays/play scripts
- Heightened expressiveness, naturalness and artistic quality
- Assists directors, actors, drama students and script writers
- Focus on semantic interpretation, computational processing of emotions, reflecting affective behaviour, and expressive multimedia scene composition
- Future work: implementation of SceneMaker

Thank you. QUESTIONS OR COMMENTS?