DESIGNING WEB INTERFACE
Presented by: S. Yamuna, AP/CSE

SNS College of Engineering, Department of Computer Science and Engineering

WHAT ARE MULTIMODAL SYSTEMS, AND WHY ARE WE BUILDING THEM? Multimodal systems process two or more combined user input modes—such as speech, pen, touch, manual gestures, gaze, and head and body movements—in a coordinated manner with multimedia system output. They represent a new direction for computing and a shift away from conventional WIMP interfaces.

Cont. Multimodal interfaces aim to recognize naturally occurring forms of human language and behavior, and incorporate at least one recognition-based technology (e.g., speech, pen, vision). GOAL: to support more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction.

Expectations of a multimodal interface:
- easier to learn and use
- potential to expand computing to more challenging applications
- ability to accommodate more adverse usage conditions than in the past
- potential to function in a more robust and stable manner than unimodal recognition systems

TYPES AND THEIR HISTORY Multimodal systems have developed rapidly during the past decade and have diversified to include new modality combinations, including:
- speech and pen input
- speech and lip movements
- speech and manual gesturing
- gaze tracking and manual input

Multimodal applications:
- multimodal map-based systems for mobile and in-vehicle use
- multimodal browsers
- multimodal interfaces to virtual reality systems for simulation and training
- multimodal person-identification/verification systems for security purposes
- multimodal medical, educational, military, and web-based transaction systems

Cont.
- multimodal access and management of personal information on handhelds and cell phones
History: the "Put That There" interface by Bolt (1980). The earliest multimodal systems supported speech input along with a standard keyboard and mouse interface. Examples: CUBRICON, Georal, Galaxy, XTRA, Shoptalk, and Miltalk (Cohen et al., 1989; Kobsa et al., 1986; Neal & Shapiro, 1991; Seneff, Goddeau, Pao, & Polifroni, 1996; Siroux, Guyomard, Multon, & Remondeau, 1995; Wahlster, 1991), followed by multimodal-multimedia map systems.

More recent multimodal systems are based on two parallel input streams, i.e., they recognize two natural forms of human language and behavior. Examples:
- speech and pen input (e.g., QuickSet)
- speech and lip movements
Multimodal systems that process speech and continuous 3D manual gesturing are emerging more slowly, because of the challenges associated with segmenting and interpreting continuous manual movements.

New kinds of multimodal systems are incorporating vision-based technologies, such as interpretation of:
- gaze
- facial expressions
- head nodding
- gesturing
- large body movements

Multimodal interface terminology
Active input modes - deployed by the user intentionally as an explicit command to a computer system (e.g., speech)
Passive input modes - naturally occurring user behavior or actions recognized by a computer (e.g., facial expressions, manual gestures)
Blended multimodal interfaces - at least one passive and one active input mode (e.g., speech and lip movements)

Temporally-cascaded multimodal interfaces - two or more user modalities sequenced in a particular temporal order (e.g., gaze, gesture, speech)
Mutual disambiguation - disambiguation of the signal in one error-prone input mode using partial information supplied by another
Simultaneous integrator - a user who habitually presents two input signals (e.g., speech, pen) in a temporally overlapped manner
Sequential integrator - a user who habitually separates two input signals
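The simultaneous/sequential distinction can be made concrete by inspecting signal timestamps. The following minimal Python sketch (the function name, interval representation, and example timings are invented for illustration and come from no particular system) classifies a pair of input signals by whether they overlap in time:

```python
# Sketch: classifying a multimodal input pair by temporal overlap.
# Intervals are (start, end) times in seconds; values are illustrative.

def integration_pattern(speech_interval, pen_interval):
    """Return 'simultaneous' if the two input signals overlap in time,
    'sequential' if one ends before the other begins."""
    s_start, s_end = speech_interval
    p_start, p_end = pen_interval
    if s_start <= p_end and p_start <= s_end:  # intervals overlap
        return "simultaneous"
    return "sequential"

# A user who habitually overlaps speech and pen is a simultaneous integrator.
print(integration_pattern((0.0, 1.8), (1.2, 2.5)))  # overlapping -> simultaneous
print(integration_pattern((0.0, 1.0), (1.5, 2.2)))  # gap between signals -> sequential
```

A fusion engine could apply such a check over many interactions to learn a user's habitual integration pattern.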

Multimodal hypertiming - involves both sequential and simultaneous integrators
Visemes - detailed classification of visible lip movements that correspond with consonants and vowels
Feature-level fusion - fusing low-level feature information from parallel input signals (e.g., speech and lip movements)
Semantic-level fusion - integrating semantic information derived from parallel input modes (e.g., speech and gesture)
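Semantic-level (late) fusion can be illustrated with a small sketch: each recognizer emits an n-best list of scored interpretations, and the fusion engine keeps only semantically compatible pairs, ranked by joint score. All function names, hypotheses, and scores below are invented for illustration:

```python
# Sketch of semantic-level fusion: combine n-best lists from two
# recognizers, keeping only semantically compatible pairings.

def semantic_fusion(speech_nbest, gesture_nbest, compatible):
    """Return (speech, gesture) pairs ranked by joint score,
    filtered by a semantic-compatibility predicate."""
    joint = []
    for s_meaning, s_score in speech_nbest:
        for g_meaning, g_score in gesture_nbest:
            if compatible(s_meaning, g_meaning):
                joint.append(((s_meaning, g_meaning), s_score * g_score))
    return sorted(joint, key=lambda pair: -pair[1])

speech = [("move <object> here", 0.6), ("remove <object>", 0.4)]
gesture = [("point: ship-3", 0.7), ("point: dock-1", 0.3)]

# Toy rule: a deictic "here" command requires a pointing gesture.
compat = lambda s, g: "here" in s and g.startswith("point")
best = semantic_fusion(speech, gesture, compat)[0]
print(best)  # top joint interpretation with its combined score
```

Feature-level fusion, by contrast, would combine the raw signal features (e.g., acoustic and lip-movement features) before any interpretation is produced.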

Apart from developments within research-level systems, multimodal interfaces are being commercialized as products in areas such as personal information access and management on handhelds and cell phones (e.g., Microsoft's handheld MiPad, Kirusa's cell-phone-based mobile map systems) and systems for safety-critical medical and military applications (e.g., Natural Interaction Systems).

GOALS AND ADVANTAGES OF MULTIMODAL INTERFACE DESIGN? Multimodal interfaces permit flexible use of input modes. Since individual input modalities are well suited in some situations, and less ideal or even inappropriate in others, modality choice is an important design issue in a multimodal system. A multimodal interface permits diverse user groups to exercise selection and control over how they interact with the computer.

For example, a visually impaired user may prefer speech input and text-to-speech output, while a user with a hearing impairment may prefer touch, gesture, or pen input. Multimodal interfaces also provide the adaptability needed to accommodate the continuously changing conditions of mobile use. Initially, efficiency gains were assumed to be the main advantage, especially when manipulating graphical information: users' efficiency improved when they combined speech and gestures multimodally to manipulate 3D objects.

One particularly advantageous feature of multimodal interface design is its superior error handling, in terms of both error avoidance and graceful recovery from errors. The reasons are both user-centered and system-centered.
User-centered reasons:
- users select the input mode that they judge to be less error-prone
- users' language often is simplified when interacting multimodally, which can substantially reduce its complexity
- users have a strong tendency to switch modes after system recognition errors, which facilitates error recovery

System-centered reasons:
- mutual disambiguation of input signals
To achieve optimal error handling, a multimodal interface ideally should be designed to include complementary input modes.
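Mutual disambiguation can be sketched in a few lines: partial information from the pen gesture rules out the top-ranked but incompatible speech hypothesis, so a lower-ranked hypothesis is recovered. The function, hypotheses, and scores below are invented for illustration:

```python
# Sketch of mutual disambiguation: a lower-ranked speech hypothesis is
# "pulled up" when parallel pen input contradicts the top candidate.

def disambiguate(speech_nbest, pen_object_type):
    """Keep only speech hypotheses whose expected object type matches
    what the pen gesture actually selected; return the best survivor."""
    compatible = [(cmd, score) for (cmd, expected, score) in speech_nbest
                  if expected == pen_object_type]
    return max(compatible, key=lambda h: h[1])[0] if compatible else None

speech_nbest = [
    ("pan the map",    "map",   0.5),  # acoustically best guess
    ("plan the route", "route", 0.3),  # lower-ranked alternative
]

# The pen gesture landed on a route object, contradicting the top guess:
print(disambiguate(speech_nbest, "route"))  # prints "plan the route"
```

This is why complementary input modes matter: each mode supplies constraints that compensate for recognition errors in the other.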

Cont. Another advantage is minimizing the user's cognitive load. As task complexity increases, users self-manage their working-memory limits by distributing information across multiple modalities, which in turn enhances their task performance (e.g., the visual-spatial "sketch pad").

METHODS AND INFORMATION NEEDED TO DESIGN NOVEL MULTIMODAL INTERFACES? The design of new multimodal systems has been inspired and organized largely by two things:
- the cognitive science literature
- high-fidelity automatic simulations

Cognitive science literature: given the complex nature of users, it plays an essential role in guiding the design of robust multimodal systems.
High-fidelity automatic simulations: help in prototyping new types of multimodal systems. Stages:
- planning: design sketches and low-fidelity mock-ups
- higher-fidelity simulation: involves the user, a simulated front end, and a programmer assistant at a remote location

Advantages of high-fidelity automatic simulations:
- relatively easy and inexpensive to adapt
- permit researchers to alter a planned system's characteristics
- allow rapid adaptation and investigation of planned system features
- support evaluation of critical performance tradeoffs

Summary: a well-designed multimodal system not only can perform more robustly than a unimodal system, but can also do so in a more stable way across varied real-world users and usage contexts. To support the further development and commercialization of multimodal systems, additional infrastructure needed in the future includes: (a) simulation tools for rapidly building and reconfiguring multimodal interfaces, (b) automated tools for collecting and analyzing multimodal corpora, and (c) automated tools for iterating new multimodal systems to improve their performance.