Validation of the NHS Scotland Employee Engagement Index

Validation of the NHS Scotland Employee Engagement Index. Presentation to SWAG. Dr Austyn Snowden

Outline: NHSEEI overview; Bespoke; Sample and assumptions; Descriptive statistics; Summary; Inferential statistics; Conclusions & recommendations

Bespoke

Sample and return in pilot 3. In Jan–Feb 2013 the pilot 3 version of the NHSEEI (Bespoke) was distributed to 2300 NHSScotland staff across three boards: NHS Forth Valley, NHS Tayside and NHS Golden Jubilee. 1280 people returned the questionnaire, an excellent return of over 56% of the staff targeted and more than double the 27% return of the 2010 staff survey. 1193 responses were analysed after those with missing data were removed.

Assumptions: raw data v logical data. Items 4, 16 and 22 are reversed items: they start 'I do not…'. A positive response (e.g. strongly agree) to these items therefore logically translates to a negative response; the participant is expressing a negative view by positively endorsing the item. We therefore distinguish between raw data and logical data: descriptive data are presented as raw data, while inferential statistics use logical data.
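
A minimal sketch of the reverse-scoring step described above, assuming responses are coded 1-6 in a pandas DataFrame; the column names are hypothetical:

```python
import pandas as pd

# Hypothetical column names for the three negatively worded ("I do not...") items.
REVERSED_ITEMS = ["item_04", "item_16", "item_22"]

def to_logical(raw: pd.DataFrame) -> pd.DataFrame:
    """Convert raw Likert scores (1-6) to 'logical' scores by reversing the
    negatively worded items, so that a higher score always means more engaged."""
    logical = raw.copy()
    logical[REVERSED_ITEMS] = 7 - logical[REVERSED_ITEMS]
    return logical
```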

NHSEEI sample (N=1193)

NHSScotland workforce

1193 responses: positive v negative (raw data); median shown.

Likert response summaries ranked positive to negative (raw data); median shown.

About me (raw data); median shown.

My team (raw data); median shown.

My organisation (raw data); median shown.

NHSEEI calculations (logical data). Individual NHSEEI is calculated by weighting each response as follows (with the weights reversed for the reversed items): Strongly agree = 6, Agree = 5, Slightly agree = 4, Slightly disagree = 3, Disagree = 2, Strongly disagree = 1. If a, b, c, d, e and f are a respondent's counts of each response category, their total is (a×6)+(b×5)+(c×4)+(d×3)+(e×2)+(f×1). This total is turned into a percentage by dividing by 168, the maximum possible total (28 items × 6), and multiplying by 100. Team NHSEEI is calculated by adding all individual NHSEEI scores and dividing by the number of people in the team.
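
A minimal sketch of the calculation described above, reusing the hypothetical to_logical helper and assuming one row per respondent with the 28 item columns:

```python
import pandas as pd

MAX_TOTAL = 28 * 6  # maximum possible total: 28 items, each scored 1-6 = 168

def individual_nhseei(logical: pd.DataFrame) -> pd.Series:
    """Individual NHSEEI as a percentage: sum of the 28 logical item scores,
    divided by the maximum possible total (168) and multiplied by 100."""
    return logical.sum(axis=1) / MAX_TOTAL * 100

def team_nhseei(logical: pd.DataFrame) -> float:
    """Team NHSEEI: the mean of the individual NHSEEI scores in the subgroup."""
    return individual_nhseei(logical).mean()
```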

Total spread of NHSEEI responses. Mean: 69%; Likert category: Agree.

Representing team NHSEEI. Team NHSEEI represents the average response of a particular subgroup. It could be represented as a point on the improve/monitor/celebrate scale. We will use profession to illustrate some possibilities.
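
As a usage sketch, team NHSEEI by profession could be computed by grouping individual scores; this reuses the hypothetical helpers above, and the 'profession' column name is an assumption:

```python
import pandas as pd

def nhseei_by_profession(df: pd.DataFrame, item_cols: list) -> pd.Series:
    """Mean individual NHSEEI per profession (i.e. team NHSEEI by profession),
    sorted from most to least engaged."""
    scores = individual_nhseei(to_logical(df[item_cols]))
    return scores.groupby(df["profession"]).mean().sort_values(ascending=False)
```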

Mean NHSEEI by profession. Likert equivalent: Agree.

Mean NHSEEI by profession, grouped into two bands: 66% and below, and 67% and above.

Team NHSEEI (calculable for any subgroup); examples shown: Managers and Support Services.

Summary of descriptive data. Mean staff response to the questionnaire is positive. The median overall response is 'Agree' once reversed items are removed. Managers are the most engaged, averaging 77% NHSEEI. Support services average 62% (Slightly agree). Any team NHSEEI can be calculated in the same way.

Total spread of item 29 responses. Mean: 6.36.

Item 29: same calculation, different representation.

Correlation between NHSEEI & item 29

Summary. There is a highly significant correlation between item 29 (overall within my organisation…) and the NHSEEI, so item 29 could offer a useful snapshot summary. However, item 29 should never be used in isolation, as it has not been tested independently of the NHSEEI.
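
A minimal sketch of how such a correlation could be checked, assuming individual NHSEEI percentages and item 29 responses are available as parallel arrays; the use of Spearman's rank correlation here is an illustrative choice for ordinal data, not necessarily the statistic used in the original analysis:

```python
from scipy import stats

def correlation_with_item29(nhseei_scores, item29_scores):
    """Spearman rank correlation between individual NHSEEI percentages and
    item 29 responses; returns the correlation coefficient and its p-value."""
    rho, p_value = stats.spearmanr(nhseei_scores, item29_scores)
    return rho, p_value
```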

Factor analysis. Reducing the 28 items into grouped 'factors' means we can better interpret the meaning of the whole. This is done by identifying correlations between patterns of responses and then explaining those patterns.
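
A minimal sketch of an exploratory factor analysis on the 28 logical items using scikit-learn; the four-factor solution and varimax rotation here are illustrative assumptions rather than a description of the method used in the report:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def four_factor_loadings(logical: pd.DataFrame) -> pd.DataFrame:
    """Fit a 4-factor model to the 28 logical item scores and return the
    item-by-factor loading matrix for interpretation."""
    fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
    fa.fit(logical)
    return pd.DataFrame(
        fa.components_.T,
        index=logical.columns,
        columns=[f"Factor {i + 1}" for i in range(4)],
    )
```

Items loading strongly on the same factor can then be read together, as in the factor structure on the next slide.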

Factor structure [diagram]: the four NHSEEI factors (Factor 1: About me; Factor 2: About my organisation; Factor 3: About my line manager; Factor 4: About my team) map onto established frameworks, including the Staff Governance Standards, staff co-production, the McLeod enablers and the Quality Strategy, which supports the validity of the NHSEEI.

Rasch Analysis Tests the assumption that we are measuring one underlying trait: staff engagement. Identifies items that don’t fit this assumption. Identifies how difficult individual items are. Identifies how capable individuals are.
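
For reference, a minimal sketch of the polytomous Rasch (rating scale) model that this kind of analysis rests on; the report's analysis would have been run in dedicated Rasch software, so this is purely illustrative of how person ability, item difficulty and category thresholds combine:

```python
import numpy as np

def rating_scale_probs(theta: float, delta: float, thresholds: np.ndarray) -> np.ndarray:
    """Category probabilities under the rating scale (Rasch) model.

    theta      -- person 'ability' (here: engagement) on the logit scale
    delta      -- item difficulty on the logit scale
    thresholds -- category thresholds (five of them for a six-category item)
    Returns probabilities for the response categories, lowest to highest.
    """
    # Running sum of (theta - delta - tau_j) for each step up the scale;
    # the bottom category has an empty sum (0).
    steps = theta - delta - thresholds
    numerators = np.concatenate(([0.0], np.cumsum(steps)))
    expn = np.exp(numerators)
    return expn / expn.sum()
```

When theta equals delta plus a given threshold, the two adjacent categories at that threshold are equally likely, which is exactly the 'category threshold' interpretation used in the person item map that follows.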

All items [chart]: hardest and easiest items indicated.

Reversed items removed: good overall fit.

Person item map. The item spread nicely covers the person spread, and the person spread is mainly positive (>0). Threshold legend: Strongly agree/Agree, Agree/Slightly agree, Slightly agree/Slightly disagree, Slightly disagree/Disagree, Disagree/Strongly disagree.

The logit scale represents item difficulty and person 'ability' on the same scale, so we can estimate the likely response if we know person ability, and vice versa. The map is a snapshot of person ability within this sample contrasted with the likelihood of a particular Likert response. The person density on the left is comparable to the information given in figure 9. The item frequencies on the right of the diagram represent the step parameters (item difficulty + category threshold). The category threshold is the point where the item characteristic curves for two adjacent categories intersect: it is where responding in adjacent categories is equally likely. In the NHSEEI there are five thresholds (five step parameters) because the questionnaire has six-category items.

The item frequencies represent the frequency of the step parameters. The light blue bars represent the step parameter between 'strongly disagree' and 'disagree', and the dark blue bars represent the step parameter between 'agree' and 'strongly agree'. Pink (disagree/slightly disagree), yellow (slightly disagree/slightly agree) and green (slightly agree/agree) represent the middle three parameters. One annotated block represents people of 'low ability'; in this scale, ability refers to engagement, so these are the least engaged. Another annotated block represents the point at which people are equally likely to answer 'strongly disagree' or 'disagree': it is the easiest parameter in the easiest question.

Person item map (continued). The Rasch analysis shows that the range of questions making up the NHSEEI taps into a meaningful and diverse range of staff engagement. The NHSEEI is therefore a useful measure across a range of abilities, and thus a useful measure of change. (The threshold legend and interpretation of the map are as described on the previous slide.)

Interpretation of factor AND Rasch analysis. Four factors explain most of the variance, consistent with the structure of the NHSEEI: my experience as an individual; my experience with my direct line manager; my experience with my team; my experience with my organisation. Heat map scanning suggested this is a function of a robust model rather than an absence of variance in item responses. Reversed items did not fit either the factor analysis or the Rasch analysis. The item spread nicely covers the person spread, indicating that the scale is a very useful and sensitive measure across the whole range of staff engagement.

Final thoughts from the focus group. Participants sometimes missed the manager's name, even though it was specified on the questionnaire. All were very positive about the style and layout of the questionnaire and were particularly impressed with the short time it took to complete. They suggested adding a timescale to clarify the period the responses refer to. Despite one or two comments about reducing the scale, the six-point scale was broadly popular. Finally, there was a very strong message that these results need to be published and fed back to participants for action; otherwise the NHSEEI will come to be seen as a 'paper exercise' and disengagement will ensue.

Recommendations. Remove the reversal of items 4, 16 and 22; otherwise leave the 28 items as they are. Add a timescale to specify the response period. Keep item 29. Keep sample sizes comparably high in subsequent iterations to maintain the level of credibility achieved here. Use summary metrics consistent with Likert response meanings (0-33% poor, 34-66% medium, 67% and above good). Ensure participants remain involved in the feedback and evolution of the NHSEEI. Test the NHSEEI nationally. Roll out the NHSEEI simultaneously with any proposed continuation of the previous staff survey, to establish construct validity between the two measures and to ascertain which measure staff prefer.