Slide 1: SEESCOA Meeting - Activities of LUC (9 May 2003)
Slide 2: Contents
- Introduction
- Task Modelling
- GUI Extension System
- Multi-modal Interaction
- Tool Support
- Future Work
- Conclusion
Slide 3: Introduction
- Validation of results from last year: what is missing? what goes wrong?
- Scenario driven
- Extend where necessary:
  - Speech interaction
  - More control over the GUI
  - Better task model integration
  - Better tool support
Slide 4: Task Modelling
- Task Model → ConcurTaskTrees (CTT, cf. previous deliverables)
- The task model is annotated with SEESCOA UI descriptions (see the sketch below)
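A minimal sketch of what such an annotated task could look like. The element and attribute names below are assumptions for illustration, not the actual SEESCOA/Dygimes schema; only the idea of attaching a UI description to a CTT task comes from the slides.

```xml
<!-- Hypothetical CTT task annotated with a UI description;
     all names are illustrative placeholders. -->
<task id="SelectReadSMS" type="interaction">
  <ui-description>
    <group title="Messages">
      <widget type="list" id="smsList"/>
      <widget type="button" id="openMessage" label="Read SMS"/>
    </group>
  </ui-description>
</task>
```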
Slide 5: Task Modelling (2)
- CTT gives us Enabled Task Sets (ETSs): an ETS is a set of tasks that are logically enabled to start their performance during the same period of time
- An ETS is the whole UI that has to be presented to the user at once
- CTT uses temporal operators; these can be used to define the order between ETSs
- A graph (= dialog model) can be constructed that defines the sequence of user interfaces necessary to complete the task
- The dialog model can be represented by a State Transition Network/Finite State Machine
- Result: better integration of the task model with the final presentation, and automatic consistency between the task model and the expected user interface
Slide 6: Task Modelling (3)
- ETS1 = {Select Read SMS, Select Adjust Volume, Close, Shut Down}
- ETS2 = {Select SMS, Close, Shut Down}
- ETS3 = {Show SMS, Close, Shut Down}
- ETS4 = {Select Adjust Volume, Close, Shut Down}
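To make the dialog model of the previous slide concrete, here is a minimal sketch of the four ETSs above wired into a state transition network. Both the XML vocabulary and the specific transitions are assumptions: the real transitions follow from the temporal operators in the task tree, and the actual Dygimes representation may look quite different.

```xml
<!-- Hypothetical dialog model for the ETSs above. States correspond
     to ETSs; a transition fires when the named task is performed.
     Element names and the exact wiring are illustrative. -->
<dialog-model start="ETS1">
  <state id="ETS1">
    <transition task="Select Read SMS" target="ETS2"/>
    <transition task="Select Adjust Volume" target="ETS4"/>
  </state>
  <state id="ETS2">
    <transition task="Select SMS" target="ETS3"/>
  </state>
  <state id="ETS3">
    <transition task="Show SMS" target="ETS1"/>
  </state>
  <state id="ETS4">
    <transition task="Select Adjust Volume" target="ETS1"/>
  </state>
  <!-- "Close" and "Shut Down" occur in every ETS and would end the
       dialog from any state; omitted above for brevity. -->
</dialog-model>
```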
Slide 7: GUI Extension System
- Abstract UI → rendering engine → GUI: by default the designer has no control over the appearance
- Designers want control over appearance
- An extension and customisation system has been implemented
- The designer controls the presentation of a widget, a widget group, and/or a category of widgets
Slide 8: GUI Extension System (2)
- Mapping rules are expressed in XML (see the sketch below)
- Mappings can change the behaviour of the rendering engine
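A minimal sketch of what such mapping rules could look like, assuming a hypothetical schema: the element names, attributes, and renderer classes below are placeholders. Only the idea of XML rules that redirect the rendering engine, at the granularity of a single widget or a whole widget category, comes from the slides.

```xml
<!-- Hypothetical mapping rules; all names are illustrative.
     The first rule overrides rendering for a whole category of
     widgets, the second for one specific widget. -->
<mapping-rules platform="java-awt">
  <mapping>
    <match category="choice"/>
    <render class="example.render.CustomChoiceRenderer"/>
  </mapping>
  <mapping>
    <match widget-id="smsList"/>
    <render class="example.render.ScrollableListRenderer"/>
  </mapping>
</mapping-rules>
```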
Slide 9: GUI Extension System (3) [figure]
Slide 10: Multi-modal Interaction - Speech Synthesis
- A spoken representation of a (group of) widget(s) is heard as a response to an action (SpeechAction)
- A dedicated tag in the XML UI specification specifies the grammar that is the result of an action (see the sketch below)
- Example: if there are 3 selected items in a list, these can be communicated by speech
- A speech synthesis action can be triggered by a speech recognition action
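A minimal sketch of how such a synthesis action could be attached to a widget. The tag name in the original slide was lost in transcription, so every name below is a placeholder, not the actual SEESCOA/Dygimes vocabulary.

```xml
<!-- Hypothetical speech-synthesis action on a list widget; all names
     are placeholders. When the action fires, the synthesiser reads
     the current selection, e.g. the 3 selected items in the list. -->
<widget type="list" id="smsList">
  <action id="speakSelection" type="speech-synthesis">
    <!-- Template for the spoken output; %selection% stands for the
         currently selected list items. -->
    <say template="Selected messages: %selection%"/>
  </action>
</widget>
```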
Slide 11: Multi-modal Interaction (2) - Speech Recognition
- Triggering actions by speech: a dedicated tag in the XML UI specification specifies the grammar that triggers the parent action (see the sketch below)
- Problems:
  - Only 2D GUIs were targeted by the initial UI specification language → difficult to model and generate speech interaction
  - Unpredictable error rate (execution of unintended actions): handling recognition faults → undoing actions
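A minimal sketch of a recognition grammar nested inside an action, under the same caveat as before: the actual tag name was lost, so the vocabulary below is assumed for illustration.

```xml
<!-- Hypothetical recognition grammar attached to a button's action;
     all names are placeholders. Any listed utterance recognised by
     the speech engine triggers the parent action. -->
<widget type="button" id="readSms" label="Read SMS">
  <action type="activate">
    <grammar>
      <utterance>read sms</utterance>
      <utterance>show my messages</utterance>
    </grammar>
  </action>
</widget>
```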
Slide 12: Tool Support
- Designers need tools
- There are no fully graphical tools for multi-device UI support
- Some experiments:
  - A multi-device layout management specification tool
  - A task model annotation tool
- These still lack integration with the SEESCOA components
Slide 13: Tool Support (2) - Task Annotation Tool [screenshot]
Slide 14: Tool Support (3) - Multi-device Layout Specification [screenshot]
Slide 15: Future Work
- Further integration of the dialog model
- Investigation of context dependency (towards CoDAMoS)
- Extension towards Web Services
- Integration with the SEESCOA component system
Slide 16: Conclusion
- Several extensions and tools have been implemented
- All the different aspects are integrated in Dygimes, a research framework for multi-device UI creation that:
  - supports a methodology for multi-device UI creation
  - enables model-based user interface design: it integrates the task model, dialog model, and presentation model
  - automates user interface generation from an annotated task model (using automatic dialog model extraction)
- Focus on a methodology for multi-device user interface design
- Integration with the component system is still incomplete
- Academic value: 4 accepted publications (in the last 6 months)