DESIGNING WEB INTERFACE
Presented by S. Yamuna, AP/CSE
SNS College of Engineering, Department of Computer Science and Engineering
WHAT ARE MULTIMODAL SYSTEMS, AND WHY ARE WE BUILDING THEM?
Multimodal systems process two or more combined user input modes—such as speech, pen, touch, manual gestures, gaze, and head and body movements—in a coordinated manner with multimedia system output. They represent a new direction for computing and a shift away from conventional WIMP (windows, icons, menus, pointer) interfaces.
Cont.
Such systems aim to recognize naturally occurring forms of human language and behavior, and incorporate at least one recognition-based technology (e.g., speech, pen, vision).
GOAL: to support more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction.
Expectations on a multimodal interface
- To be easier to learn and use
- Potential to expand computing to more challenging applications
- To accommodate more adverse usage conditions than in the past
- Potential to function in a more robust and stable manner than unimodal recognition systems
TYPES AND THEIR HISTORY
Multimodal systems have developed rapidly during the past decade. They have also diversified to include new modality combinations, including:
- speech and pen input
- speech and lip movements
- speech and manual gesturing
- gaze tracking and manual input
Multimodal applications
- Multimodal map-based systems for mobile and in-vehicle use
- Multimodal browsers
- Multimodal interfaces to virtual reality systems for simulation and training
- Multimodal person-identification/verification systems for security purposes
- Multimodal medical, educational, military, and web-based transaction systems
Cont.
- Multimodal access and management of personal information on handhelds and cell phones
- The “Put That There” interface (Bolt, 1980)
- The earliest multimodal systems supported speech input along with a standard keyboard and mouse interface. Examples: CUBRICON, Georal, Galaxy, XTRA, Shoptalk, and Miltalk (Cohen et al., 1989; Kobsa et al., 1986; Neal & Shapiro, 1991; Seneff, Goddeau, Pao, & Polifroni, 1996; Siroux, Guyomard, Multon, & Remondeau, 1995; Wahlster, 1991)
- Multimodal-multimedia map systems
More recent multimodal systems are based on two parallel input streams, i.e., they recognize two natural forms of human language and behavior. Examples:
- speech and pen input (e.g., QuickSet)
- speech and lip movements
Multimodal systems that process speech and continuous 3D manual gesturing are still emerging, owing to the challenges associated with segmenting and interpreting continuous manual movements.
New kinds of multimodal systems are incorporating vision-based technologies, such as interpretation of:
- gaze
- facial expressions
- head nodding
- gesturing
- large body movements
Multimodal interface terminology
Active input modes - deployed by the user intentionally as an explicit command to a computer system (e.g., speech)
Passive input modes - naturally occurring user behavior or actions recognized by a computer (e.g., facial expressions, manual gestures)
Blended multimodal interfaces - combine at least one passive and one active input mode (e.g., speech and lip movements); a sketch of the active/passive distinction follows below
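For illustration, a minimal sketch (all names hypothetical, not from any real toolkit) of how a blended interface might tag incoming events as active commands versus passively observed behavior:

```python
# Sketch: tagging input events as active or passive modes so a blended
# interface can treat explicit commands and observed behavior differently.
from dataclasses import dataclass
from enum import Enum

class ModeKind(Enum):
    ACTIVE = "active"    # intentional command, e.g., speech, pen
    PASSIVE = "passive"  # observed behavior, e.g., gaze, lip movements

@dataclass
class InputEvent:
    mode: str
    kind: ModeKind
    payload: object

events = [InputEvent("speech", ModeKind.ACTIVE, "open the map"),
          InputEvent("gaze", ModeKind.PASSIVE, (412, 198))]

# Only active modes are treated as explicit commands; passive modes
# provide context (e.g., what the user is looking at).
commands = [e for e in events if e.kind is ModeKind.ACTIVE]
print([e.payload for e in commands])  # ['open the map']
```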
Temporally-cascaded multimodal interfaces - two or more user modalities sequenced in a particular temporal order (e.g., gaze, gesture, speech)
Mutual disambiguation - disambiguation of a signal in one error-prone input mode using partial information supplied by another
Simultaneous integrator - a user who habitually presents two input signals (e.g., speech, pen) in a temporally overlapped manner
Sequential integrator - a user who habitually separates two input signals; a sketch of classifying these integration patterns follows below
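A minimal sketch of how a system might label a user as a simultaneous or sequential integrator from signal timing; the names and the majority-vote rule are illustrative assumptions, not a published algorithm:

```python
# Sketch: classifying a user's habitual integration pattern from the
# onset/offset times of their paired input signals.
from dataclasses import dataclass

@dataclass
class Signal:
    mode: str     # e.g., "speech" or "pen"
    start: float  # onset time in seconds
    end: float    # offset time in seconds

def overlaps(a: Signal, b: Signal) -> bool:
    """True if the two input signals overlap in time."""
    return a.start < b.end and b.start < a.end

def integration_pattern(pairs: list[tuple[Signal, Signal]]) -> str:
    """Label a user by majority vote over observed multimodal constructions."""
    simultaneous = sum(overlaps(a, b) for a, b in pairs)
    return "simultaneous" if simultaneous > len(pairs) / 2 else "sequential"

# A user who speaks while drawing (temporally overlapped signals):
print(integration_pattern([(Signal("speech", 0.0, 1.2),
                            Signal("pen", 0.4, 0.9))]))  # simultaneous
```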
Multimodal hypertiming - involves both sequential and simultaneous integrators
Visemes - a detailed classification of visible lip movements that correspond with consonants and vowels
Feature-level fusion - fusing low-level feature information from parallel input signals (e.g., speech and lip movements)
Semantic-level fusion - integrating semantic information derived from parallel input modes (e.g., speech and gesture), as sketched below
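To make semantic-level fusion concrete, here is a minimal sketch in the spirit of "put that there"-style commands: each recognizer produces a partial meaning frame, and fusion unifies them when their slots are compatible. The frame contents and the `unify` helper are hypothetical:

```python
# Sketch of semantic-level fusion: merge partial semantic frames from two
# input modes, failing when the modes give conflicting interpretations.
def unify(speech_frame: dict, gesture_frame: dict):
    """Merge two partial frames; return None on a slot conflict."""
    merged = dict(speech_frame)
    for slot, value in gesture_frame.items():
        if slot in merged and merged[slot] != value:
            return None  # conflicting interpretations, no joint reading
        merged[slot] = value
    return merged

speech = {"action": "move"}                       # "move that there"
gesture = {"object": "unit-3", "destination": (41.2, -73.9)}  # deictic pen marks
print(unify(speech, gesture))
# {'action': 'move', 'object': 'unit-3', 'destination': (41.2, -73.9)}
```

Feature-level fusion, by contrast, would combine the raw signal features (e.g., acoustic and lip-movement features) before any such frames exist.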
Apart from developments within research-level systems, multimodal interfaces are being commercialized as products in areas such as:
- personal information access and management on handhelds and cell phones (e.g., Microsoft's handheld MiPad, Kirusa's cell-phone systems)
- mobile map-based systems
- systems for safety-critical medical and military applications (e.g., Natural Interaction Systems)
GOALS AND ADVANTAGES OF MULTIMODAL INTERFACE DESIGN
Multimodal interfaces permit flexible use of input modes. Since individual input modalities are well suited to some situations, and less ideal or even inappropriate in others, modality choice is an important design issue in a multimodal system. A multimodal interface permits diverse user groups to exercise selection and control over how they interact with the computer.
For example, a visually impaired user may prefer speech input and text-to-speech output, while a user with a hearing impairment may prefer touch, gesture, or pen input (a modality-preference sketch follows below). Multimodal interfaces also provide the adaptability needed to accommodate the continuously changing conditions of mobile use. In earlier days, an efficiency gain was assumed to be the main advantage, especially when manipulating graphical information: users' efficiency improved when they combined speech and gestures multimodally to manipulate 3D objects.
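One simple way to realize this kind of user-controlled flexibility is a modality-preference profile; the profile names and fields below are illustrative assumptions only:

```python
# Sketch: per-user modality preferences, letting diverse user groups
# select how they interact with the system.
PROFILES = {
    "visually_impaired": {"input": ["speech"], "output": ["tts"]},
    "hearing_impaired":  {"input": ["touch", "gesture", "pen"],
                          "output": ["screen"]},
    "default":           {"input": ["speech", "pen", "touch"],
                          "output": ["screen", "tts"]},
}

def enabled_modes(profile_name: str) -> dict:
    """Return the input/output modes to enable for this user."""
    return PROFILES.get(profile_name, PROFILES["default"])

print(enabled_modes("visually_impaired"))
# {'input': ['speech'], 'output': ['tts']}
```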
One particularly advantageous feature of multimodal interface design is its superior error handling, both in terms of error avoidance and graceful recovery from errors. There are both user-centered and system-centered reasons for this.
User-centered reasons:
- users select the input mode that they judge to be less error-prone
- users' language is often simplified when interacting multimodally, which can substantially reduce its complexity
- users have a strong tendency to switch modes after system recognition errors, which facilitates error recovery
System-centered reasons:
- mutual disambiguation of input signals (sketched below)
To achieve optimal error handling, a multimodal interface ideally should be designed to include complementary input modes.
Cont.
Another advantage is minimizing the user's cognitive load. As task complexity increases, users self-manage their working-memory limits by distributing information across multiple modalities, which in turn enhances their task performance (e.g., the visual-spatial “sketch pad” of working memory).
METHODS AND INFORMATION NEEDED TO DESIGN NOVEL MULTIMODAL INTERFACES
The design of new multimodal systems has been inspired and organized largely by two things:
- the cognitive science literature
- high-fidelity automatic simulations
Cognitive science literature
Given the complex nature of users, the cognitive science literature plays an essential role in guiding the design of robust multimodal systems.
High-fidelity automatic simulations help in prototyping new types of multimodal systems. Stages:
- planning stages, design sketches, and low-fidelity mock-ups
- higher-fidelity simulation
A simulation involves a user, a simulated front end, and a programmer assistant at a remote location (sketched below).
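A minimal sketch of this simulation setup, in the Wizard-of-Oz style the slide describes: the user interacts with what looks like a working front end, while the "recognition" is actually supplied by the hidden programmer-assistant. The function names are hypothetical, and the wizard is stubbed with canned answers so the sketch runs standalone:

```python
# Sketch: a simulated front end whose "recognizer" is a hidden human wizard.
def wizard_interpret(raw_input):
    # In a real simulation, a human assistant at a remote console answers
    # this call in real time; the canned table below is a stand-in.
    canned = {"zoom in near the river": ("zoom", "river")}
    return canned.get(raw_input, ("unknown", None))

def simulated_front_end(user_turns):
    """Route each user turn through the wizard and act on the result."""
    for turn in user_turns:
        action, target = wizard_interpret(turn)
        print(f"user: {turn!r} -> system executes: {action} {target}")

simulated_front_end(["zoom in near the river"])
# user: 'zoom in near the river' -> system executes: zoom river
```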
Advantages of high-fidelity automatic simulations
- relatively easy and inexpensive to adapt
- permit researchers to alter a planned system's characteristics
- allow rapid adaptation and investigation of planned system features
- support evaluation of critical performance tradeoffs
Summary
A well-designed multimodal system not only can perform more robustly than a unimodal system, but also in a more stable way across varied real-world users and usage contexts. To support the further development and commercialization of multimodal systems, additional infrastructure that will be needed in the future includes:
(a) simulation tools for rapidly building and reconfiguring multimodal interfaces
(b) automated tools for collecting and analyzing multimodal corpora
(c) automated tools for iterating new multimodal systems to improve their performance