1 Overview of field trial analysis procedures
National Research Coordinators Meeting, Windsor, June 2008

2 Content of presentation
Purposes of field trial analysis
Methodologies applied
– IRT Rasch model
– Factor analysis
Criteria used
– Fit indices
– Item and scale statistics

3 Purposes of field trial
Test feasibility of construct measurement
Review scalability of item material
Check test and questionnaire length
Compare different formats (items with and without “don’t know” category)
Inform on relationships between constructs and variables
Compare results from on-line and paper surveys

4 Data included in analysis
Data from 31 countries included in international analyses
Instrument data from
– Student test (98 cognitive items)
– Student questionnaire
– Teacher questionnaire
– School questionnaire
– Regional instruments (cognitive and questionnaire data)
Comparison of on-line and paper surveys (international option)

5 Types of analysis
Review of frequencies and means
Review of correlations between variables and constructs
Review of reliabilities and item-score correlations
IRT (Rasch) scaling results
Exploratory and confirmatory factor analysis

6 Analysis reports
Part 1 in NRC(June08)2.pdf
– Analysis of cognitive test items
– Analysis of student questionnaire data
– Four appendix documents (a, b, c and d)
Part 2 in NRC(June08)3.pdf
– Analysis of teacher questionnaire data
– Analysis of school questionnaire data
– Two appendix documents (a and b)
– Additional document on comparison of paper and online modes (NRC(June08)3c.pdf)

7 Cognitive test data analysis
Review of omitted, invalid and “not reached” responses
Analysis of item difficulties, discrimination and Rasch model fit
Differential item functioning
– Gender groups
– Countries
Analysis of dimensionality
Trend item analysis

8 Not reached items (medians)
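
The chart for this slide did not survive the transcript. As a minimal sketch of the underlying computation, assuming a hypothetical data layout with one row per student and an "NR" code for not-reached responses (the actual ICCS coding scheme may differ):

    import pandas as pd

    # Illustrative response data: one row per student, "NR" = not reached.
    responses = pd.DataFrame({
        "country": ["AAA", "AAA", "BBB", "BBB"],
        "item_01": [1, 0, 1, "NR"],
        "item_02": [0, "NR", 1, "NR"],
    })

    item_cols = [c for c in responses.columns if c.startswith("item_")]
    # Count not-reached items per student, then take the median per country.
    responses["n_not_reached"] = (responses[item_cols] == "NR").sum(axis=1)
    print(responses.groupby("country")["n_not_reached"].median())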

9 The IRT “Rasch” model
Modelling the probability of getting a correct response
Modelling the probability of getting an incorrect response
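
The model equations on this slide were images and did not survive the transcript. In the standard dichotomous Rasch model, with person ability θ_n and item difficulty δ_i, the two probabilities referred to above are:

    P(X_{ni} = 1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
    \qquad
    P(X_{ni} = 0) = \frac{1}{1 + \exp(\theta_n - \delta_i)}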

10 IRT curves

11 MC item statistics (example)

12 Example

13 IRT models for categorical data
Extension of the Rasch model with additional step parameters (τ)
Partial credit model has different step parameters for each item

14 Partial Credit Model – probabilities
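
The probability curves on this slide did not survive the transcript. As a minimal sketch, the category probabilities can be computed from the parameterisation used on these slides (latent trait θ, item location δ, step parameters τ); the example values are illustrative:

    import numpy as np

    def pcm_probabilities(theta, delta, taus):
        # Category x (x = 0..m) gets the numerator
        # exp(sum_{k<=x}(theta - delta - tau_k)); category 0 gets exp(0) = 1.
        # Normalising yields the partial credit model probabilities.
        steps = theta - delta - np.asarray(taus, dtype=float)
        logits = np.concatenate(([0.0], np.cumsum(steps)))
        expos = np.exp(logits - logits.max())   # subtract max for numerical stability
        return expos / expos.sum()

    # A three-category item (two steps) for a person of average ability.
    print(pcm_probabilities(theta=0.0, delta=0.5, taus=[-0.4, 0.4]))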

15 Test item difficulties and abilities

16 Gender DIF
Gender effect directly estimated with ACER ConQuest
Reflects the difference in logits if item parameters had been estimated separately for males and females
Items flagged when the combined effect (estimated effect × 2) exceeded 0.3 logits
– DPC item stats: separate effects reported
Generally, few cognitive test items showed gender DIF
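
A minimal sketch of the flagging rule above; the estimation itself was done in ACER ConQuest, so the per-gender difficulty estimates here are stand-in values:

    # Flag items whose female/male difficulty estimates differ by more
    # than 0.3 logits (the "combined effect" described above).
    difficulties = {
        # item: (difficulty estimated for females, for males), in logits
        "item_01": (-0.25, 0.15),
        "item_02": (0.60, 0.55),
    }

    for item, (female, male) in difficulties.items():
        combined_effect = abs(female - male)
        if combined_effect > 0.3:
            print(f"{item}: gender DIF flagged ({combined_effect:.2f} logits)")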

17 Item-by-country interaction

18 Item dimensionality
No clear pattern of dimensionality with regard to old CIVED and new ICCS items
– High correlation between the two test parts (0.87)
High correlations for sub-dimensions
– Cognitive dimensions: 0.89
– Content dimensions: 0.93

19 Review of CIVED trend items

20 Coder reliability

21 Summary of test item analysis
Very positive results regarding the scalability of test items
Support for uni-dimensionality of test items
Few items to be deleted or modified
Open-ended items generally performed well (except one item)

22 National reports
Purpose: checking of national item statistics and review of possible explanations
Only for cognitive test items
Graphical displays

23 Item stats in graphical form

24 National item fit and discrimination

25 National item difficulties and thresholds

26 National item review list

27 International summary

28 Item statistics
DPC provided NRCs with item statistics
Review of
– category frequencies
– point-biserial correlations
– Rasch parameters and fit
– gender DIF information
– difficulty as percentage correct (national and international)
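
A point-biserial item-total correlation of the kind listed above can be computed directly with SciPy; the scores below are illustrative:

    import numpy as np
    from scipy.stats import pointbiserialr

    item = np.array([1, 0, 1, 1, 0, 1, 0, 1])            # dichotomous item score
    total = np.array([28, 15, 25, 30, 12, 22, 18, 27])   # total test score

    r, p = pointbiserialr(item, total)
    print(f"point-biserial r = {r:.2f} (p = {p:.3f})")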

29 Questionnaire item analysis
Review of frequencies (including for omitted and invalid responses)
Comparison of different formats (with and without “don’t know” categories)
Analysis of scaling properties (reliabilities, Rasch modelling)
Analysis of dimensionality
Analysis of relationships between variables and constructs

30 Correlations
Reporting of Pearson’s correlation coefficients
Used to review whether expected relationships are found in the data (e.g. correlations between indicators of social background)
Correlation with test performance regularly reported for student scales
Criteria (not “scientific”):
– < 0.1: not substantial
– 0.1 – 0.2: weak
– 0.2 – 0.5: moderate
– > 0.5: strong
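
A minimal sketch computing Pearson's r (here with SciPy) and applying the rough bands above; the data are illustrative:

    from scipy.stats import pearsonr

    def label(r):
        # Qualitative bands as listed on the slide (not "scientific").
        r = abs(r)
        if r < 0.1:
            return "not substantial"
        if r < 0.2:
            return "weak"
        if r <= 0.5:
            return "moderate"
        return "strong"

    books_at_home = [0, 25, 100, 200, 500, 50]     # illustrative background indicator
    test_score = [310, 420, 480, 510, 560, 450]    # illustrative test performance
    r, _ = pearsonr(books_at_home, test_score)
    print(f"r = {r:.2f} ({label(r)})")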

31 Reliabilities
Cronbach’s alpha coefficient
– Is influenced by the number of items!
Item-by-total correlation
Criteria:
– < 0.60: poor
– 0.60 – 0.70: marginally satisfactory
– > 0.70: good
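
Cronbach's alpha follows a standard formula, sketched below; note how the item count k enters it, which is why alpha tends to rise with scale length:

    import numpy as np

    def cronbach_alpha(items):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Illustrative 4-item Likert scale answered by five respondents.
    data = [[3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 4, 4, 5],
            [1, 2, 1, 2],
            [3, 3, 4, 3]]
    print(f"alpha = {cronbach_alpha(data):.2f}")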

32 Example of scale information

33 Exploratory factor analysis
Used for exploring dimensionality of sets of items
VARIMAX rotation
– Assumes factors to be uncorrelated
PROMAX rotation
– Allows factors to be correlated
Not always reported, as it was used primarily in preliminary analysis steps
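
A minimal EFA sketch, assuming the third-party factor_analyzer package (not named in the original), which offers both rotations mentioned above:

    import numpy as np
    from factor_analyzer import FactorAnalyzer   # pip install factor-analyzer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                # illustrative item responses

    efa = FactorAnalyzer(n_factors=2, rotation="promax")   # or rotation="varimax"
    efa.fit(X)
    print(efa.loadings_)                         # item-by-factor loading matrix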

34 Confirmatory factor analysis
Model estimation based on variances and covariances
– LISREL and SAS CALIS estimates
– Maximum likelihood (items assumed to be continuous)
Analysis to confirm an expected factor structure
Model fit indices indicate whether the model “fits the data”

35 Example of CFA

36 CFA by country

37 Fit indices
RMSEA (root mean square error of approximation)
– > 0.10: poor model fit
– 0.10 – 0.05: marginally satisfactory model fit
– < 0.05: close model fit
RMR (root mean square residual)
– > 0.10: poor model fit
– 0.10 – 0.05: marginally satisfactory model fit
– < 0.05: close model fit
CFI (comparative fit index) and NNFI (non-normed fit index)
– < 0.70: poor model fit
– 0.80 – 0.90: marginally satisfactory model fit
– > 0.90: close model fit

38 IRT models for categorical items
Partial credit model models the response probability for each category depending on the latent trait θ
– Item location parameter δ
– Step parameter τ
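
With the slide's notation (latent trait θ_n, item location δ_i, step parameters τ_ik, with τ_i0 ≡ 0), the partial credit model is conventionally written as the probability of person n scoring x on item i:

    P(X_{ni} = x) =
      \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_i - \tau_{ik})}
           {\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta_n - \delta_i - \tau_{ik})},
      \qquad x = 0, 1, \dots, m_i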

39 ACER ConQuest models
ITEM+ITEM*STEP: constrained model
– Assumes all item parameters to be equal across countries
ITEM-CNT+ITEM*CNT+ITEM*STEP: unconstrained model
– Assumes item location parameters to be different across countries

40 Item-by-country interaction
Item-by-country interaction effects sum to zero
For review, the median of the absolute values was taken as an indicator of measurement invariance across countries
Items with values > 0.3 logits were interpreted as having large item-by-country interactions
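
A minimal sketch of this review rule; the interaction estimates are stand-in values:

    import numpy as np

    # One interaction estimate per country for each item; the estimates
    # sum to zero across countries by construction.
    interactions = {
        "item_01": [0.05, -0.10, 0.02, 0.03],
        "item_02": [0.45, -0.50, 0.40, -0.35],
    }

    for item, effects in interactions.items():
        median_abs = np.median(np.abs(effects))
        if median_abs > 0.3:
            print(f"{item}: large item-by-country interaction "
                  f"(median |effect| = {median_abs:.2f} logits)")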

41 IRT result tables

42 Scope of analysis
Given the narrow timeframe for analysis, not all of these analyses were carried out for all instruments
Reviews of frequencies, computation of scale reliabilities and exploratory factor analyses were done for all field trial instruments

43 Analysis of relationships between context variables
Correlation of school, teacher and student data aggregated at the school level
– Results show quite a few of the expected relationships
Single-level regression analysis for test performance and expected electoral participation
– Models explained 25 percent of the variance in test performance and 21 percent of the variance in the index of expected electoral participation
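
A minimal sketch of a single-level regression reporting R², the "variance explained" figure quoted above; the predictors and data are synthetic:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))             # e.g. three background variables
    y = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(scale=1.0, size=500)

    model = LinearRegression().fit(X, y)
    print(f"R^2 = {model.score(X, y):.2f}")   # share of variance explained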

44 General outcomes
Good scaling properties for most items
– Some constructs and items not retained due to poor results
Comparison of formats
– No substantial differences in outcomes but large differences in missing values
– Proposal to omit “don’t know” categories
Encouraging results for measurement of socio-economic student background
– Proposal not to retain household possession items

45 Questions or comments?

