Evaluation: Analyzing results


Evaluation: Analyzing results ECE 695 Alexander J. Quinn 4/20/2016

Today: Project questions (references, overall structure). Finish study design exercise from Monday. Discuss questions from today's reading. Analyzing data.

Q: “Factors for usability are Learnability, Efficiency, Memorability, Errors, Satisfaction. In this chapter how to measure Memorability and Satisfaction were not discussed.” --CM A: The types of performance metrics in this article are not directly connected to Jakob Nielsen’s metrics. Jakob Nielsen. 1996. Usability Metrics: Tracking Interface Improvements. IEEE Software 13, 6 (November 1996), 12-13.

Five common performance metrics (credit: Measuring the User Experience; Jakob Nielsen, Usability Metrics: Tracking Interface Improvements, 1996):
- Task success is perhaps the most widely used performance metric. It measures how effectively users are able to complete a given set of tasks. Two different types of task success will be reviewed: binary success and levels of success.
- Time-on-task is a common performance metric that measures how much time is required to complete a task.
- Errors reflect the mistakes made during a task. Errors can be useful in pointing out particularly confusing or misleading parts of an interface.
- Efficiency can be assessed by examining the amount of effort a user expends to complete a task, such as the number of clicks on a website or the number of button presses on a cell phone.
- Learnability is a way to measure how performance changes over time.
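To make the first two concrete, here is a minimal sketch (hypothetical per-trial data, not from the chapter) computing binary task success and time-on-task:

```python
# A minimal sketch, assuming hypothetical per-trial records of
# (participant, succeeded, seconds). Binary success rate and mean
# time-on-task for successful trials fall out directly.
trials = [
    ("p1", True, 42.0),
    ("p2", False, 90.0),
    ("p3", True, 51.5),
    ("p4", True, 38.2),
]

# Fraction of trials completed successfully.
success_rate = sum(ok for _, ok, _ in trials) / len(trials)

# Mean time-on-task, counting only successful trials.
n_ok = sum(ok for _, ok, _ in trials)
mean_time = sum(t for _, ok, t in trials if ok) / n_ok

print(f"success = {success_rate:.0%}, mean time (successes) = {mean_time:.1f} s")
# success = 75%, mean time (successes) = 43.9 s
```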

Q: “How to combine these five performance metrics?” --JZ A: Crisp user goals will help you form clear questions to guide your study. From there, the types of performance metrics described in this chapter are just tools for answering your questions.

Q: “Can you easily test for levels of success in an online test?” --SC A: Non-answer: It depends on your study design.

Q: “Is there a guideline for how long a certain type of task should take?” --SC A: Non-answer: Use predictive models such as MHP (Model Human Processor) or KLM (Keystroke-Level Model); a KLM sketch follows. Real answer: Ideally, you are always comparing to a meaningful baseline.
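A minimal KLM sketch, assuming typical published operator times from Card, Moran & Newell (exact figures vary by source); the task breakdown below is hypothetical:

```python
# Keystroke-Level Model: predicted task time is the sum of the
# times of the primitive operators the task requires.
OPERATORS = {
    "K": 0.28,  # keystroke (average non-secretarial typist)
    "P": 1.10,  # point with the mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Sum operator times for a sequence like 'MHPKK'."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: think, move hand to mouse, point at a field,
# home back to the keyboard, type five characters.
print(klm_estimate("MHPH" + "K" * 5))  # ~4.65 s
```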

Q: “This chapter said that it is not always easy to collect error data. It will be better if there were some examples to show how to collect error data in the generic task performance evaluation.” --CM A: This will be task-dependent. Measurable behaviors such as backtracking or stray clicks may be a good proxy; one way to count them from a navigation log is sketched below.
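A minimal sketch of that idea, assuming a hypothetical navigation log listing pages in visit order; A → B → A patterns (returning to the page just left) are counted as backtracks:

```python
# Count A -> B -> A patterns in a page-visit sequence, a rough
# proxy for wrong turns when direct error data is unavailable.
def count_backtracks(visits):
    return sum(
        1
        for a, b, c in zip(visits, visits[1:], visits[2:])
        if a == c and a != b
    )

visits = ["home", "search", "results", "search", "results", "item"]
print(count_backtracks(visits))
# 2: search->results->search and results->search->results
```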

Analysis

Key questions for any empirical evaluation: What are the independent variables (IVs)? Of what type, and how many? What are the dependent variables (DVs)? Do you need to control for other factors?
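For example, in a hypothetical two-interface study: the IV is the interface (nominal, two levels), the DV is time-on-task in seconds, and participant expertise is recorded so it can be controlled for:

```python
# Hypothetical study data: one row per trial.
trials = [
    # (interface [IV], expertise [control], time_on_task_s [DV])
    ("A", "novice", 48.2),
    ("A", "expert", 31.5),
    ("B", "novice", 55.9),
    ("B", "expert", 29.8),
]

# Group the DV by the IV and compare means per level.
by_interface = {}
for iv, _, dv in trials:
    by_interface.setdefault(iv, []).append(dv)
print({iv: sum(v) / len(v) for iv, v in by_interface.items()})
# {'A': 39.85, 'B': 42.85}
```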

Credit: Anne Marenco, http://www.csun

The ANOVA F-test. The ANOVA F-statistic compares variation due to specific sources (levels of the factor) with variation among individuals who should be similar (individuals in the same sample): F = (variation among the sample means) / (variation among individuals within the same sample). Difference in means small relative to overall variability → F tends to be small. Difference in means large relative to overall variability → F tends to be large. Larger F-values typically yield more significant results; how large depends on the degrees of freedom (I − 1 and N − I). Credit: W. H. Freeman and Company, 2009 – verbatim
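As a worked illustration on hypothetical time-on-task data (using scipy.stats.f_oneway), with I = 3 groups and N = 12 observations:

```python
# One-way ANOVA on hypothetical time-on-task data for three interfaces.
from scipy.stats import f_oneway

a = [48.2, 51.0, 45.7, 49.9]   # interface A
b = [55.9, 58.3, 54.1, 60.2]   # interface B
c = [47.5, 46.8, 50.3, 48.0]   # interface C

f_stat, p_value = f_oneway(a, b, c)
# Degrees of freedom: I - 1 = 2 (between groups), N - I = 9 (within groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```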