Assess usability of a Web site’s information architecture: Approximate people’s information-seeking behavior (Monte Carlo simulation); output quantitative usability metrics.


Web TANGO: Tools for Assessing NaviGation and information Organization
Melody Y. Ivory, Rashmi R. Sinha, Marti A. Hearst, Steve Demby, Anthony Lee, & Dave Lai
Group for User Interface Research, UC Berkeley

Research Problem
90% of sites provide inadequate usability, and most problems are due to poor information architectures; 196 million new sites are expected in 5 years.

Proposed Solution
New automated methodologies and tools, Web TANGO (Tools for Assessing NaviGation and information Organization), to enable designers of information-centric Web sites to compare designs.

Goal: assess the usability of a Web site’s information architecture by approximating people’s information-seeking behavior (Monte Carlo simulation) and outputting quantitative usability metrics (e.g., number of errors and navigation time), providing automated support for comparing design alternatives (existing and new sites).

Simulator Architecture: Models
–Site: a node for each page (metadata, links, page complexity)
–Server: latency and load
–User: personal and computer characteristics
  –Personal: memory size, reading speed, probabilities for non-intrinsic characteristics (e.g., read a page, complete the task, make an error)
  –Computer: transfer speed
–Information-seeking task(s): target pages in the site and keywords

Monte Carlo Simulator
For each trial:
–Choose links based on probability function(s); the functions incorporate the user models and metadata analysis (information retrieval principles)
–Predict metrics (e.g., loading, reading, and thinking time) at each page
–Report quantitative usability measures (e.g., navigation time, number of errors, memory load)
–Report simulated navigation paths
Results are averaged over all trials.
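The trial loop described above can be sketched roughly as follows. Everything here, including the toy site graph, the information-scent link weighting, and the timing constants, is a hypothetical illustration under assumed parameters, not the actual Web TANGO implementation.

```python
import random

# Hypothetical site model: page -> word count and outgoing links,
# each link annotated with the keywords of its anchor text.
SITE = {
    "home":     {"words": 150, "links": {"products": {"catalog"}, "about": {"company"}}},
    "products": {"words": 300, "links": {"widget": {"widget", "catalog"}, "home": set()}},
    "about":    {"words": 200, "links": {"home": set()}},
    "widget":   {"words": 400, "links": {"home": set()}},
}

READING_WPM = 250         # assumed reading speed (user model)
LOAD_TIME_S = 1.0         # assumed per-page loading time (server model)
MAX_STEPS = 10            # give up after this many pages (counts as an error)

def choose_link(page, task_keywords, rng):
    """Pick an outgoing link, weighting links whose anchor keywords
    overlap the task keywords (a crude information-scent score)."""
    links = SITE[page]["links"]
    weights = [1 + 5 * len(kw & task_keywords) for kw in links.values()]
    return rng.choices(list(links), weights=weights, k=1)[0]

def run_trial(start, target, task_keywords, rng):
    """One simulated information-seeking episode: accumulate loading
    and reading time at each page until the target is found or we give up."""
    page, elapsed = start, 0.0
    for _ in range(MAX_STEPS):
        elapsed += LOAD_TIME_S
        elapsed += SITE[page]["words"] / READING_WPM * 60  # reading time in seconds
        if page == target:
            return elapsed, True
        page = choose_link(page, task_keywords, rng)
    return elapsed, False

def simulate(start, target, task_keywords, trials=1000, seed=0):
    """Average navigation time and error rate over all trials."""
    rng = random.Random(seed)
    results = [run_trial(start, target, task_keywords, rng) for _ in range(trials)]
    times = [t for t, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    return {"mean_time_s": sum(times) / trials, "error_rate": errors / trials}

print(simulate("home", "widget", {"widget", "catalog"}))
```

Averaging over many randomized trials is what makes this a Monte Carlo method: no single simulated user is meaningful, but the aggregate navigation time and error rate characterize the design.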
Quantitative Measures for Web Pages
We identified key quantitative measures for distinguishing highly rated Web pages:
–Word Count
–Body Text Percentage
–Emphasized Body Text Percentage
–Text Positioning Count
–Text Cluster Count
–Link Count
–Page Size
–Graphic Percentage
–Graphics Count
–Color Count
–Font Count
–Reading Complexity

Study Data
428 Web pages from information-centric Web sites:
–Ranked: rated favorably via expert review, or by user ratings on expert-chosen sites
–Unranked: not favorably rated (do not assume unfavorable)

Study Findings
–Several features are significantly associated with ranked sites.
–Several pairs of features correlate strongly.
–Correlations mean different things in ranked vs. unranked pages.
–The significant features are partially successful at predicting whether a site is ranked (63% accuracy).

Current Status
–Developing a new metrics tool (new metrics and functionality) to repeat the study.
–Developing a design tool for comparing Web pages to ranked pages.

Web TANGO Usage Scenario
–Represent the design in TANGO (site model)
–Specify server parameters (server model)
–Specify user characteristics (user models)
–Specify the target information (tasks)
–Specify the starting page(s) in the site
–Run the simulator to produce results
–Compare results and select the best design

Future Work
Web TANGO is a work in progress. Future work entails:
–Conducting an online study to correlate page composition (e.g., number of words, links, graphics, fonts, reading complexity) with perceived page complexity.
–Developing several user models based on observed usage.
–Developing navigation prediction algorithms.
–Implementing the simulator.
–Validating simulator results with user studies.

Our evaluation testbed is a healthcare intranet at Kaiser Permanente that enables clinicians to access a wide range of information resources.

More info -
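Several of the page-level measures above can be computed directly from a page's HTML. The sketch below uses Python's standard html.parser with simplified, assumed definitions of each measure; the study's exact operationalizations may differ.

```python
from html.parser import HTMLParser

class PageMetrics(HTMLParser):
    """Collect a few of the page-level measures (word count, link count,
    graphics count, font count) using simplified approximations."""
    def __init__(self):
        super().__init__()
        self.words = 0
        self.links = 0
        self.graphics = 0
        self.fonts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1          # each anchor counts as one link
        elif tag == "img":
            self.graphics += 1       # each image counts as one graphic
        elif tag == "font":
            face = dict(attrs).get("face")
            if face:
                self.fonts.add(face.lower())  # distinct font faces

    def handle_data(self, data):
        self.words += len(data.split())      # whitespace-delimited words

def page_measures(html):
    p = PageMetrics()
    p.feed(html)
    return {"word_count": p.words, "link_count": p.links,
            "graphics_count": p.graphics, "font_count": len(p.fonts)}

sample = ('<html><body><font face="Arial">Hello world</font> '
          '<a href="/x">a link</a> <img src="y.gif"></body></html>')
print(page_measures(sample))
# -> {'word_count': 4, 'link_count': 1, 'graphics_count': 1, 'font_count': 1}
```

Measures like Body Text Percentage or Reading Complexity would need extra machinery (rendered layout, a readability formula), which is why a dedicated metrics tool is a project of its own.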