

1 The Development and Evaluation of a Survey to Measure User Engagement Heather L. O’Brien Elaine G. Toms Presented By Jesse Coultas University of Illinois at Chicago

2 Overview Development of a survey tool to measure user engagement, based on the authors' previous research, to provide a consistent definition of user engagement.

3 Research Questions
How can we measure engagement?
Would the attributes reported in O’Brien & Toms (2008) be supported in the current research?
What is the nature of the relationships amongst these attributes?

4 Attributes of Engagement
Aesthetics, Feedback, Affect, Interest, Focused Attention, Motivation, Novelty, Challenge, Perceived Time, Control

5 Why a Survey Tool?
Chosen based on past research for collecting user perceptions
Ease of application for evaluating future projects/products
Provides data on both user experience and perceptions

6 Survey Item Sources
Existing scales from the literature (n = 109)
Interview transcripts from the authors' previous study (n = 350)

7 Survey Item Evaluation
Duplication/repetitiveness
Potential applicability to HCI environments and experience management outcomes
Potential to be used across a range of computer applications

8 Survey Item Evaluation Method
All items were evaluated by a nonaffiliated researcher and the first author.
Each item was mapped to an attribute, with a rationale and a judgment on whether it should be retained.
Items were discussed to reach consensus, with the second author acting as tie-breaker.

9 Construction
Items formatted as statements with an appropriate tone
Five-point Likert scale (agree/disagree) with an N/A option
Items presented in randomized order
Pretesting with a convenience sample reduced 186 items to 123
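The randomized presentation described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the item wordings below are hypothetical placeholders, not the actual instrument.

```python
import random

# Hypothetical survey items (not the actual instrument wording).
items = [
    "I felt absorbed in the shopping task.",
    "The website was aesthetically appealing.",
    "Using the website was mentally taxing.",  # a reverse-worded item
]

def presentation_order(items, seed=None):
    """Return a randomized copy of the item list, leaving the original intact."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

print(presentation_order(items, seed=42))
```

Fixing the seed per respondent would make each participant's order reproducible while still varying order across participants.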

10 Study 1: Establish the Scale

11 Objectives
To assess each item, ensuring the instrument contained only the most parsimonious set of items.
To evaluate the reliability of the subscales constructed for each attribute.
To examine the reliability of the overall instrument.

12 Method
Multi-page online survey: introduction, informed consent, demographics, survey items (3-12 pages) about the last online shopping experience, concluding page
Prize entry at the end of the survey
Progress bar shown

13 Recruitment
In person: visiting undergrad classes, placing notices around campus
Online: forums, discussion boards, listservs
Snowball sampling

14 Data Preparation
Reverse-coded items used to detect acquiescence bias; item scales reversed before analysis
"Not applicable" responses marked as missing data
Items eliminated based on nonresponse rate, variability of response, and negative correlation with other items
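The reverse-coding and N/A handling above can be sketched as follows. This is a minimal sketch with hypothetical responses, assuming the slide's 5-point scale: a reverse-worded item's score is flipped with (max + min) - raw = 6 - raw, and N/A responses become missing values.

```python
NA = None  # "Not Applicable" responses are treated as missing data

def reverse_code(response, scale_min=1, scale_max=5):
    """Flip a reverse-worded item's score; leave missing (N/A) values alone."""
    if response is NA:
        return NA
    return (scale_max + scale_min) - response

# Hypothetical raw responses to one reverse-coded item.
raw = [5, 4, NA, 1, 2]
prepared = [reverse_code(r) for r in raw]
print(prepared)  # [1, 2, None, 5, 4]
```

After this step, a respondent who agrees with everything (acquiescence bias) produces inconsistent scores on the flipped items, which is what makes the bias detectable.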

15 Reliability
Seeking Cronbach's alpha between 0.7 and 0.9
Iterative reduction of items: eliminating items with the lowest item-total correlations to minimize the number of questions
Reduced to 49 items
The authors say that they got rid of negatively correlating items; is that a normal procedure when doing EFA? -Aditi

16 Exploratory Factor Analysis
Kaiser-Meyer-Olkin Measure of Sampling Adequacy (0.94): values >= 0.50 indicate that variance is caused by underlying factors [1]
Bartlett's Test of Sphericity (p < ): p < 0.05 indicates a possible relationship among the data [1]
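Bartlett's test of sphericity, one of the two adequacy checks above, asks whether the items' correlation matrix differs from the identity matrix (i.e., whether the items are related enough to factor). A minimal sketch on synthetic data, using the standard test statistic (the data and values here are illustrative, not the study's):

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test statistic and degrees of freedom.

    chi2 = -(n - 1 - (2p + 5) / 6) * ln(det(R)), df = p(p - 1) / 2,
    where R is the p x p item correlation matrix from n respondents.
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

# Synthetic data: 50 respondents, 4 items sharing one underlying factor.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 1))
data = base + 0.5 * rng.normal(size=(50, 4))   # shared factor + noise
chi2, df = bartlett_sphericity(data)
print(chi2 > 0, df)  # the statistic is compared to a chi-square with df dof
```

A large statistic relative to a chi-square distribution with df degrees of freedom (p < 0.05) supports factoring; correlated items shrink det(R) below 1, inflating the statistic.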

17 Exploratory Factor Analysis (continued)
Varimax rotation: orthogonal rotation to maximize variance
Comrey and Lee criteria: interpretation of overlapping variance between variable and factor; cutoff at 0.45 (20% overlapping variance)
Are the Comrey and Lee criteria based on Cronbach's alpha? -Aditi
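Applying the 0.45 loading cutoff above is a simple filter: an item stays on a factor only if its loading exceeds 0.45 in absolute value (0.45 squared is roughly 20% of the item's variance overlapping with the factor). A minimal sketch with hypothetical loadings, not the paper's:

```python
# Hypothetical factor loadings for items on one factor (not from the paper).
loadings = {
    "item_focus_1": 0.72,
    "item_focus_2": 0.51,
    "item_focus_3": 0.38,   # below the cutoff: dropped
    "item_focus_4": 0.46,
}

CUTOFF = 0.45  # Comrey and Lee cutoff used on this slide

retained = {item: l for item, l in loadings.items() if abs(l) > CUTOFF}
print(sorted(retained))  # ['item_focus_1', 'item_focus_2', 'item_focus_4']
```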

18 Interpreted Factors
Focused Attention (7 items)
Perceived Usability (8 items)
Aesthetics (5 items)
Endurability (5 items)
Novelty (3 items)
Felt Involvement (3 items)
31 items total
I did not understand whether the 6 attributes in the user engagement model were decided before doing the EFA, or whether the EFA allowed the authors to infer the attributes from the results. (The paper indicates that they had 10 attributes and retained the names, but adjusted the items within the attributes according to the analysis.) How is it done conventionally? -Aditi

19 Study 2: Test the Validity of the Scale

20 Objectives
Assess discriminant validity: would we obtain similar results with another sample?
Determine relationships between each factor
Model the relationships
Confirm the model

21 Structural Equation Modeling
Confirmatory factor analysis: compare factor structure between study 1 and study 2
Path analysis: examine predictive relationships among the resulting factors
I understand why CFA is done, as it tries to find the relationships among the attributes; would that have been informed (not the direction of causality, but its presence) through correlation during the EFA? -Aditi

22 Hypothesis The Engagement Scale comprises the six identified factors. Factor prediction is as shown below.

23 Methodology
Online survey using the 31 items developed during study 1
Sent to customers of a major online book retailer who had made a purchase in the past 3 months
No questions excluded from analysis
I think that recalling the last shopping experience for answering the survey requires a higher mental workload, and participants may not recall all aspects of the experience. -Nina

24 Consistency
All factors but Novelty were consistent under principal components analysis
Novelty formed a factor whose loadings overlapped with other factors
PCA among the other factors indicated the scale was multidimensional

25 SEM: Proposed Model
75% (478) of responses used to test the proposed model
Model was not well fitting: Root Mean Square Error of Approximation (RMSEA) = 0.323; goal is < 0.10 [1]
Other fit indices: Chi-square (x2), lower is better; degrees of freedom (df); Comparative Fit Index (CFI); Goodness of Fit Index (GFI) above 0.90; Normed Fit Index (NFI) above 0.90
[1] Hu and Bentler (1999) indicate a cutoff value of 0.06 is optimal. Hu, L. & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55.
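The RMSEA threshold discussed on this slide can be illustrated with the standard point-estimate formula, RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))). The chi-square and df values below are hypothetical, chosen only to contrast a well-fitting and a poorly fitting model at the slide's sample size:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (point estimate).

    Values below ~0.10 (or 0.06 per Hu & Bentler, 1999) suggest good fit.
    """
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical fit results, not the paper's: a model whose chi-square is
# close to its degrees of freedom fits well; a large excess fits poorly.
good = rmsea(chi2=210.0, df=200, n=478)
poor = rmsea(chi2=2200.0, df=200, n=478)
print(round(good, 3), round(poor, 3))  # -> 0.01 0.145
```

The max(..., 0) clamp reflects that a chi-square at or below its degrees of freedom already indicates fit as good as the model can show, so RMSEA bottoms out at zero.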

26 Final Model
[Path diagram: relationships among Felt Involvement, Aesthetics, Perceived Usability, Endurability, Novelty, and Focused Attention, with path coefficients ranging from 0.19 to 0.47. The diagram distinguishes relationships that were present but not predicted from a hypothesized relationship that was not present.]

27 SEM: Final Model
RMSEA using the first dataset indicated a well-fitting model
25% (143) of responses used to confirm the final model, with similar results: RMSEA with p = 0.069, GFI = 0.97, NFI = 0.98, indicating a well-fitting model (GFI and NFI above 0.90)

28 Discussion Items
Is using leading questions justified in some circumstances? -Aditi
The study was administered without considering participants' motivation or the actual website they used the instrument for; does that affect the EFA results? -Aditi
The authors did not take into account the websites users used the instrument for, which may affect the results. -Pantea
In the second study, the response rate was low, and the result may be biased because the sample was not heterogeneous across genders. -Pantea
This work only considered online shopping, so there should also be validating studies for other areas. -Pantea

