1
User Interface Agents
Roope Raisamo (rr@cs.uta.fi)
Department of Computer and Information Sciences
University of Tampere
http://www.cs.uta.fi/~rr/
2
User Interface Agents
• A user interface agent guides and helps the user
  – Many user interface agents observe the activities of the user and suggest better ways of carrying out the same operations
  – They can also automate a series of operations based on observing the user
• Many user interface agents are based on the principles of programming by example (PBE)
3
Two examples of user interface agents:
• Eager
• Letizia
4
Eager – an automated macro generator
Allen Cypher, 1991
http://www.acypher.com/Eager/
• Observes the activities of the user and tries to detect repeating sequences of actions. When such a sequence is detected, it offers to automate that task.
• Works like an automated macro generator
• This kind of functionality is still not part of common applications, even though it could be.
5
Eager
• Eager observes repeating sequences of actions
• When Eager finds one, it jumps onto the screen and suggests the next step
6
Eager
• When all the steps suggested by Eager have been shown and accepted, the user can give Eager permission to carry out the automated task.
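The repetition detection described above can be sketched in a few lines. This is a hypothetical minimal illustration, not Cypher's actual algorithm: it checks whether the tail of the user's action log immediately repeats the run before it, and if so predicts that the pattern will continue.

```python
def predict_next_action(log):
    """If the tail of the action log exactly repeats the run just before
    it, suggest the action that would continue the pattern; else None."""
    n = len(log)
    # Try progressively shorter candidate patterns, longest first.
    for length in range(n // 2, 0, -1):
        tail = log[n - length:]
        # Does the same sequence occur immediately before the tail?
        if log[n - 2 * length:n - length] == tail:
            # The pattern has repeated; predict its first action again.
            return tail[0]
    return None

# The user alternates two formatting commands; the agent predicts the next.
actions = ["bold", "italic", "bold", "italic"]
print(predict_next_action(actions))  # → bold
```

A real Eager-style agent generalizes over varying arguments (e.g., "copy cell A1", "copy cell A2") rather than requiring exact repetition, but the detection loop has this shape.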
7
Letizia – a browser companion agent
• Letizia observes the user and tries to preload interesting web pages while the user browses the web
8
Letizia
9
Letizia
• Traditional browsing leads the user into a depth-first search of the Web
• Letizia conducts a concurrent breadth-first search rooted at the user's current position
10
The appearance of agents
11
The appearance of an agent
• The appearance of an agent is a very important feature when a user tries to find out what the agent can do.
• It is a serious mistake to use an appearance that leads the user to believe the agent is more intelligent than it really is.
• The appearance must not be distracting.
12
Computer-generated talking head
• One of the most demanding forms of agent presentation
• A human head suggests that the agent is rather intelligent
• A talking head is probably the most natural way to present an agent in a conversational user interface
13
Drawn or animated characters
• The appearance has a great effect on the expectations of the user
  – a paper clip vs. a dog vs. Merlin the Sorcerer
• Presentation can be continuously animated, slowly changing, or static
14
Textual presentation
• Textual feedback on the actions of an agent
  – In textual user interfaces we should usually avoid textual input unless it is part of the main task that the agent is observing.
• Chatterbots
  – e.g., Julia, a user in a MUD world that can also answer questions about this world. http://lcs.www.media.mit.edu/people/foner/Yenta/julia.html
  – so-called NPCs (non-player characters) in multiplayer role-playing computer games
15
Auditory presentation
• An agent can also be presented by voice or sound alone, through the auditory channel
  – ambient sound
  – beeps, signals
  – melodies, music
  – recorded speech
  – synthetic speech
16
Haptic presentation
• In addition to the auditory channel, or to replace it, an agent can present information through haptic feedback
• Haptic stimulation modalities:
  – force and position
  – tactile
  – vibration
  – thermal
  – electrical
17
Haptic output devices
Inexpensive devices:
– The most common haptic devices are still the various force-feedback controllers used in computer games, such as force-feedback joysticks and wheels.
– In 1999 Immersion Corporation's force-feedback mouse was introduced as the Logitech Wingman Force Feedback Gaming Mouse.
– In 2000 Immersion Corporation's tactile-feedback mouse was introduced as the Logitech iFeel Tactile Feedback Mouse.
18
Haptic output devices
More sophisticated devices:
– SensAble Technologies: PHANTOM
– Immersion Corporation: Impulse Engine
– Often very expensive and non-ergonomic
[Images: VTi CyberForce, Impulse Engine 2000, VTi CyberTouch, PHANTOM]
19
No direct presentation at all
• An agent helps the user by carrying out various supporting actions
  – e.g., prefetching needed information, automatic hard disk management, …
• An indirectly controlled background agent
  – question: how to implement this indirect control?
  – multisensory input: the agent observes a system, an environment, or the user
20
Related user interface metaphors:
• Conversational User Interface
• Multimodal User Interface
21
Conversational User Interfaces
• Why conversation?
  – a natural way of communicating
  – learned at quite a young age
  – tries to fix the problems of direct manipulation user interfaces
• Conversation augments, but does not necessarily replace, a traditional user interface
  – the failure of Microsoft Bob
  – the Microsoft Office Assistant
22
Microsoft Office Assistant
• The Office Assistant tries to help with the use of Microsoft Office programs, with varying success.
• The user can choose the appearance of the agent
  – unfortunately, this has no effect on the capabilities of the agent
• A paper clip is most likely a better presentation for the current assistant than a Merlin character.
23
Multimodal User Interfaces
• "Multimodal interfaces combine many simultaneous input modalities and may present the information using synergistic representation of many different output modalities" [Raisamo, 1999]
24
Multimodal User Interfaces
• An agent makes use of multimodality when observing the user:
  – speech recognition
    • reacts to speech commands, or observes the user without requiring explicit commands
  – machine vision, pattern recognition:
    • recognizing facial gestures
    • recognizing gaze direction
    • recognizing gestures
25
Multimodal User Interfaces
• A specific problem in multimodal interaction is combining the simultaneous inputs.
  – this requires a certain amount of task knowledge and "intelligence"
  – in this way, every multimodal user interface is at least in some respect a user interface agent that tries to find out what the user wants based on the available information
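One classic way to combine simultaneous inputs is time-based fusion in the spirit of Bolt's Put-That-There: each deictic word in a speech command ("that", "there") is resolved to the pointing gesture closest to it in time. The sketch below is a hypothetical simplification; the event format and timestamps are assumptions, not any particular system's API.

```python
def fuse(speech_events, gesture_events):
    """speech_events: list of (time, word); gesture_events: list of
    (time, target). Returns the utterance with deictic words replaced
    by the target of the temporally nearest pointing gesture."""
    resolved = []
    for t, word in speech_events:
        if word in ("that", "there") and gesture_events:
            # Pick the pointing event nearest in time to the spoken word.
            _, target = min(gesture_events, key=lambda g: abs(g[0] - t))
            resolved.append(target)
        else:
            resolved.append(word)
    return resolved

# "Put that there", with two pointing gestures at 0.5 s and 1.1 s.
speech = [(0.0, "put"), (0.4, "that"), (1.2, "there")]
gestures = [(0.5, "blue square"), (1.1, "upper left corner")]
print(fuse(speech, gestures))  # → ['put', 'blue square', 'upper left corner']
```

Real fusion engines also weigh recognition confidence and task context, which is where the "intelligence" mentioned above comes in.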
26
A high-level architecture for multimodal user interfaces
Adapted from [Maybury and Wahlster, 1998]
27
Modeling [Nigay and Coutaz, 1993]
28
Put-That-There [Bolt, 1980]
29
Example: Digital Smart Kiosk
• Smart Kiosk was a research project at the Compaq-Digital Cambridge Research Laboratory in which an easy-to-use information kiosk was built for use by all people
• Combines new technology:
  – machine vision, pattern recognition
  – speech synthesis (DECtalk)
  – speech recognition
  – animated talking head (DECface)
[Christian and Avery, 1998]
30
Example: Digital Smart Kiosk
[Figure: kiosk architecture – vision module, DECface, Netscape Navigator, active vision zone, touchscreen]
31
Example: Digital Smart Kiosk
34
References
[Bolt, 1980] Richard A. Bolt, Put-That-There. SIGGRAPH '80 Conference Proceedings, ACM Press, 1980, 262-270.
[Christian and Avery, 1998] Andrew D. Christian and Brian L. Avery, Digital Smart Kiosk project. Human Factors in Computing Systems, CHI '98 Conference Proceedings, ACM Press, 1998, 155-162.
[Maybury and Wahlster, 1998] Mark T. Maybury and Wolfgang Wahlster (Eds.), Readings in Intelligent User Interfaces. Morgan Kaufmann Publishers, 1998.
[Nigay and Coutaz, 1993] Laurence Nigay and Joëlle Coutaz, A design space for multimodal systems: concurrent processing and data fusion. Human Factors in Computing Systems, INTERCHI '93 Conference Proceedings, ACM Press, 1993, 172-178.
[Raisamo, 1999] Roope Raisamo, Multimodal Human-Computer Interaction: A Constructive and Empirical Study. Ph.D. dissertation, Report A-1999-13, Department of Computer Science, University of Tampere. http://granum.uta.fi/pdf/951-44-4702-6.pdf