
1 Items analysis

2 1. Introduction Items can adopt different formats and assess cognitive variables (skills, performance, etc.), where there are right and wrong answers, or non-cognitive variables (attitudes, interests, values, etc.), where there are no right answers. The statistics we present are used primarily with skill or performance items.

3 1. Introduction To carry out item analysis, the following must be available: – A data matrix with the subjects' responses to each item. To analyze test scores and responses to the correct alternative, the matrix takes the form of ones (hits) and zeros (mistakes). To analyze the incorrect alternatives, the matrix must record the specific option selected by each subject. Analysis of the correct alternative (which offers more information about the quality of the test) allows us to obtain the item's indices of difficulty, discrimination, reliability, and validity.

4 1. Introduction Empirical difficulty of an item: the proportion of subjects who answer it correctly. Discriminative power: the ability of the item to distinguish between subjects with different levels of the measured trait. Both statistics are directly related to the mean and variance of the total test scores. The reliability and validity of the items are related to the standard deviation of the test and indicate each item's possible contribution to the reliability and validity of the total scores of the test.

5 2. Items difficulty The proportion of subjects who answer an item correctly is one of the most popular indices for quantifying the difficulty of dichotomous or dichotomized items:

ID = A / N

A: number of subjects who answer the item correctly. N: number of subjects who attempt the item. Difficulty defined this way is relative, because it depends on: – The number of people who attempt the item. – Their characteristics. ID ranges between 0 and 1. – 0: no subject answered the item correctly; it is difficult. – 1: all subjects answered the item correctly; it is easy.

6 Example: a performance item in mathematics is applied to 10 subjects, with the following results:

Subject: a b c d e f g h i j
Answer:  1 1 1 1 0 1 0 1 1 0

ID = 7/10 = 0.7. The obtained value does not indicate whether the item is good or bad; it only represents how hard the item was for the sample of subjects who attempted it.
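The difficulty index above can be sketched in a few lines of Python; the data are the ten 0/1 answers from the example.

```python
# Difficulty index (ID) for the slide-6 example: 10 subjects, scored
# 1 (hit) or 0 (miss) on one mathematics item.
answers = [1, 1, 1, 1, 0, 1, 0, 1, 1, 0]   # subjects a..j

def difficulty_index(scores):
    """ID = A / N: proportion of subjects answering correctly."""
    return sum(scores) / len(scores)

id_value = difficulty_index(answers)   # 7 / 10 = 0.7
```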

7 2. Items difficulty The ID is directly related to the mean and variance of the test. In dichotomous items: – The sum of all subjects' scores on an item equals the number of hits; therefore, the difficulty index of an item equals its mean. Generalizing to the whole test, the mean of the test scores equals the sum of the difficulty indices of the items.

8 2. Items difficulty The relationship between difficulty and test variance is also direct. For a dichotomous item, S_j² = p_j·q_j. In item analysis, a relevant question is which value of p_j maximizes the item variance: – Maximum variance is achieved when p_j = 0.5. – An item is appropriate when it is answered by different subjects and elicits different answers from them.
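A quick numeric scan confirms that the item variance p·q peaks at p = 0.5:

```python
# Variance of a dichotomous item is S_j^2 = p_j * q_j; scan a grid of
# difficulty values to confirm the maximum sits at p_j = 0.5.
variances = {p / 10: (p / 10) * (1 - p / 10) for p in range(1, 10)}
best_p = max(variances, key=variances.get)   # 0.5, with variance 0.25
```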

9 2. Items difficulty 2.1. Correction of hits by chance Hitting an item depends not only on whether subjects know the answer, but also on the luck of those who, without knowing it, pick the correct alternative. The more distractors an item has, the less likely subjects are to hit it by chance. It is therefore advisable to correct the ID (negative values can occur):

IDc = (A − E/(K − 1)) / N

where E is the number of errors and K the number of answer alternatives.

10 2. Items difficulty 2.1. Correction of hits by chance Example: a test composed of items with 3 alternatives. Calculate ID and IDc for each item.

Subject  Item 1  Item 2  Item 3  Item 4  Item 5
A        1       1       1       1       1
B        1       0       1       0       1
C        1       1       0       1       0
D        1       0       0       1       0
E        0       1       0       1       1
F        1       0       0       1       0
G        0       1       1       1       0
H        1       0       0       1       0
I        1       1       0       0       0
J        0       0       0       1       1

11 2. Items difficulty 2.1. Correction of hits by chance

Subject  Item 1  Item 2  Item 3  Item 4  Item 5
A        1       1       1       1       1
B        1       0       1       0       1
C        1       1       0       1       0
D        1       0       0       1       0
E        0       1       0       1       1
F        1       0       0       1       0
G        0       1       1       1       0
H        1       0       0       1       0
I        1       1       0       0       0
J        0       0       0       1       1
ID       0.7     0.5     0.3     0.8     0.4
IDc      0.55    0.25    -0.05   0.7     0.1

The items that receive the largest corrections are those that proved most difficult.
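The ID and IDc rows of this table can be reproduced directly, assuming the correction IDc = (A − E/(K−1))/N with K = 3 alternatives:

```python
# Correction for guessing on the slide-11 data: 10 subjects x 5 items,
# each item with K = 3 alternatives.
matrix = [  # rows = subjects A..J, columns = items 1..5
    [1, 1, 1, 1, 1], [1, 0, 1, 0, 1], [1, 1, 0, 1, 0], [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 1], [1, 0, 0, 1, 0], [0, 1, 1, 1, 0], [1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0], [0, 0, 0, 1, 1],
]

def idc(scores, k=3):
    """IDc = (A - E/(K-1)) / N, where A = hits and E = errors."""
    a = sum(scores)
    e = len(scores) - a
    return (a - e / (k - 1)) / len(scores)

cols = list(zip(*matrix))                  # one tuple per item
ids = [sum(c) / len(c) for c in cols]      # 0.7, 0.5, 0.3, 0.8, 0.4
idcs = [idc(c) for c in cols]              # 0.55, 0.25, -0.05, 0.7, 0.1
```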

12 2. Items difficulty 2.1. Correction of hits by chance Recommendations – Items whose difficulty index is extreme for the population they target should be eliminated from the final test. – In aptitude tests we obtain better psychometric results if the majority of items are of medium difficulty. – Easy items should be included, preferably at the beginning (to measure the less competent subjects). – Difficult items should be included too (to measure the more competent subjects).

13 3. Discrimination Logic: given an item, subjects with good test scores will have answered it correctly in a higher proportion than those with low scores. If an item is not useful for differentiating between subjects based on their skill level (it does not discriminate between subjects), it should be deleted.

14 3. Discrimination 3.1. Index of discrimination based on extreme groups (D) It is based on the proportions of hits in the extreme skill groups (the top and bottom 25% or 27% of the total sample). – The upper 25% or 27% is formed by the subjects who scored above the 75th or 73rd percentile on the total test. Once the groups are formed, we calculate the proportion of correct answers to the item in each group and apply:

D = p_u − p_l

where p_u and p_l are the proportions of hits in the upper and lower groups.

15 3. Discrimination 3.1. Index of discrimination based on extreme groups (D) The D index ranges between −1 and 1. – 1 = all people in the upper group hit the item and all people in the lower group fail it. – 0 = the item is hit equally often in both groups. – Negative values = less competent subjects hit the item more often than the most competent ones (the item confuses the more skilled subjects).

16 3. Discrimination 3.1. Index of discrimination based on extreme groups (D) Example. Answers given by 370 subjects to the 3 alternatives (A, B, C) of an item in which B is the correct option. Rows show the frequency of subjects who selected each alternative among those scoring in the top 27% and bottom 27% of the sample on the total test, and in the central 46%. Calculate the corrected index of difficulty and the discrimination index.

                  A    B*   C
Upper 27%         19   53   28
Intermediate 46%  52   70   48
Lower 27%         65   19   16

17 3. Discrimination 3.1. Index of discrimination based on extreme groups (D) Proportion of correct answers: (53+70+19)/370 = 0.38. Proportion of mistakes: 228/370 = 0.62. Corrected difficulty: IDc = (142 − 228/2)/370 ≈ 0.08. Discrimination: D = 53/100 − 19/100 = 0.34.
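Both statistics for this item can be checked with a short script (K = 3 alternatives, so the guessing correction divides the errors by 2):

```python
# Extreme-groups discrimination (D) and corrected difficulty for the
# slide-16 item; B is the correct alternative.
upper = {"A": 19, "B": 53, "C": 28}    # top 27% of 370 (n = 100)
lower = {"A": 65, "B": 19, "C": 16}    # bottom 27% (n = 100)

d = upper["B"] / sum(upper.values()) - lower["B"] / sum(lower.values())
# d = 0.53 - 0.19 = 0.34

hits, n = 53 + 70 + 19, 370            # whole sample
idc = (hits - (n - hits) / (3 - 1)) / n    # (142 - 114)/370 ≈ 0.08
```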

18 3. Discrimination 3.1. Index of discrimination based on extreme groups (D) Interpretation of D values (Ebel, 1965):

D ≥ 0.40          The item discriminates very well
0.30 ≤ D ≤ 0.39   The item discriminates well
0.20 ≤ D ≤ 0.29   The item discriminates slightly
0.10 ≤ D ≤ 0.19   The item needs revision
D < 0.10          The item is useless

19 3. Discrimination 3.2. Indices of discrimination based on the correlation If an item discriminates adequately, the correlation between the subjects' scores on that item and their scores on the total test will be positive. – Subjects who score high on the test are more likely to hit the item. Definition: the correlation between subjects' scores on the item and their scores on the test (Muñiz, 2003). The total score of each subject is calculated after discounting the item score.

20 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.1. Correlation coefficient Φ Used when both the test score and the item score are strictly dichotomous. It also allows us to estimate the discrimination of an item against some criterion of interest (e.g., fit vs. unfit, sex, etc.). First, we sort the data into a 2x2 contingency table: – 1 = the item is hit / the criterion is exceeded. – 0 = the item is failed / the criterion is not exceeded.

21 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.1. Correlation coefficient Φ Example. The following table shows the sorted results of 50 subjects who took the last psychometrics exam.

                        Item 5 (X)
                        1       0
Criterion   Fit         30      5      p_y = 35/50 = 0.7
(Y)         Not fit     5       10     q_y = 15/50 = 0.3
            p_x = 35/50 = 0.7   q_x = 15/50 = 0.3    N = 50

p_xy = 30/50 = 0.6

22 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.1. Correlation coefficient Φ

Φ = (p_xy − p_x·p_y) / √(p_x·q_x·p_y·q_y) = (0.6 − 0.7·0.7) / √(0.7·0.3·0.7·0.3) ≈ 0.52

There is a high correlation between the item and the criterion. That is, subjects who hit the item usually pass the psychometrics exam.
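The same Φ value can be verified from the marginal proportions of the contingency table:

```python
import math

# Phi coefficient for the slide-21 contingency table (item 5 vs. the
# fit/not-fit criterion, N = 50).
p_xy = 30 / 50                 # hit the item AND passed the criterion
p_x, p_y = 35 / 50, 35 / 50    # marginal proportions
q_x, q_y = 1 - p_x, 1 - p_y

phi = (p_xy - p_x * p_y) / math.sqrt(p_x * q_x * p_y * q_y)   # ≈ 0.52
```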

23 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.2. Point-biserial correlation Used when the item is a dichotomous variable and the test score is continuous:

r_pb = ((X̄_p − X̄) / S_X) · √(p/q)

X̄_p = mean test score of the participants who answered the item correctly. X̄ = mean of the test. S_X = standard deviation of the test. p = proportion of participants who answered the item correctly. q = proportion who answered it incorrectly. Remember to remove the item score from the test score.

24 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.2. Point-biserial correlation Example. The following table shows the responses of 5 participants to 4 items. Calculate the point-biserial correlation of the second item.

Participant  Item 1  Item 2  Item 3  Item 4
A            0       1       0       1
B            1       1       0       1
C            1       1       1       1
D            0       0       0       1
E            1       1       1       0

25 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.2. Point-biserial correlation

Participant  Item 1  Item 2  Item 3  Item 4  X   (X−i)  (X−i)²
A            0       1       0       1       2   1      1
B            1       1       0       1       3   2      4
C            1       1       1       1       4   3      9
D            0       0       0       1       1   1      1
E            1       1       1       0       3   2      4
                                             Σ   9      19

26 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.2. Point-biserial correlation The participants who answered the item correctly are A, B, C, and E, so their mean is (1 + 2 + 3 + 2)/4 = 2. The total mean is 9/5 = 1.8. The standard deviation of the test is √(19/5 − 1.8²) = √0.56 ≈ 0.75.

27 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.2. Point-biserial correlation

r_pb = ((2 − 1.8) / 0.75) · √(0.8/0.2) ≈ 0.53
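The whole item-2 computation can be reproduced from the table, using the rest score (X − i) as the test score:

```python
import math

# Point-biserial discrimination of item 2 with the slide-25 data; the
# test score excludes the item itself (X - i).
item = [1, 1, 1, 0, 1]     # responses of A..E to item 2
rest = [1, 2, 3, 1, 2]     # (X - i) column of the table

p = sum(item) / len(item)                                       # 0.8
mean_hit = sum(r for i, r in zip(item, rest) if i) / sum(item)  # 2.0
mean_all = sum(rest) / len(rest)                                # 1.8
sd = math.sqrt(sum(r * r for r in rest) / len(rest) - mean_all ** 2)

r_pb = (mean_hit - mean_all) / sd * math.sqrt(p / (1 - p))      # ≈ 0.53
```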

28 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.3. Biserial correlation Used when both the item and the test score are inherently continuous variables, although one of them (the item) has been dichotomized:

r_b = ((X̄_p − X̄) / S_X) · (p/y)

y = height of the normal curve at the z score below which lies a probability equal to p (see table). Values greater than 1 can occur, especially when one of the variables is not normal. Example. Based on the table of the previous example, calculate the biserial correlation of item 3.

29 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.3. Biserial correlation

Participant  Item 1  Item 2  Item 3  Item 4  X   (X−i)  (X−i)²
A            0       1       0       1       2   2      4
B            1       1       0       1       3   3      9
C            1       1       1       1       4   3      9
D            0       0       0       1       1   1      1
E            1       1       1       0       3   2      4
                                             Σ   11     27

30 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.3. Biserial correlation The participants who answered the item correctly are C and E, so their mean is (3 + 2)/2 = 2.5. The total mean is 11/5 = 2.2. The standard deviation of the test is √(27/5 − 2.2²) = √0.56 ≈ 0.75.

31 3. Discrimination 3.2. Indices of discrimination based on the correlation 3.2.3. Biserial correlation Because the value p = 0.4 does not appear in the first column of the table, we look up its complement (0.6), which is associated with y = 0.3863.

r_b = ((2.5 − 2.2) / 0.75) · (0.4/0.3863) ≈ 0.42
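Instead of reading y from a printed table, the normal-curve ordinate can be computed directly with the standard library:

```python
import math
from statistics import NormalDist

# Biserial correlation of item 3 with the slide-29 data. The ordinate y
# is the standard normal density at the quantile cutting off proportion p.
item = [0, 0, 1, 0, 1]     # responses of A..E to item 3
rest = [2, 3, 3, 1, 2]     # (X - i) column

p = sum(item) / len(item)                                       # 0.4
mean_hit = sum(r for i, r in zip(item, rest) if i) / sum(item)  # 2.5
mean_all = sum(rest) / len(rest)                                # 2.2
sd = math.sqrt(sum(r * r for r in rest) / len(rest) - mean_all ** 2)

y = NormalDist().pdf(NormalDist().inv_cdf(p))                   # ≈ 0.3863
r_b = (mean_hit - mean_all) / sd * (p / y)                      # ≈ 0.42
```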

32 3. Discrimination 3.3. Discrimination in attitude items There are no right or wrong answers; the participant must be placed on the continuum established according to the degree of the measured attribute. Discrimination is the correlation between item scores and test scores. – Because the items are not dichotomous, the Pearson correlation coefficient is used. – That coefficient can be interpreted as a Homogeneity Index (HI): it indicates the extent to which the item measures the same dimension or attitude as the rest of the items of the scale.

33 3. Discrimination 3.3. Discrimination in attitude items Items whose HI is below 0.20 should be eliminated. Correction: deduct the item score from the total score, or apply:

HI_c = (r_jx·S_X − S_j) / √(S_X² + S_j² − 2·r_jx·S_j·S_X)

34 3. Discrimination 3.3. Discrimination in attitude items Example. The table below presents the answers of 5 people to 4 attitude items. Calculate the discrimination of item 4 by the Pearson correlation.

Subject  X1  X2  X3  X4  XT  X4·XT  X4²  XT²
A        2   4   4   3   13  39     9    169
B        3   4   3   5   15  75     25   225
C        5   2   4   3   14  42     9    196
D        3   5   2   4   14  56     16   196
E        4   5   2   5   16  80     25   256
Σ                    20  72  292    84   1042

35 3. Discrimination 3.3. Discrimination in attitude items The correlation (HI) between item 4 and the total score of the test is:

r = (5·292 − 20·72) / √((5·84 − 20²)·(5·1042 − 72²)) = 20/√520 ≈ 0.88

This result is inflated because the item 4 score is included in the total score. Correction, using the standard deviations of item 4 and the total score (S_4 ≈ 0.89, S_T ≈ 1.02), gives a corrected correlation close to 0.
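Both the inflated HI and its correction can be verified with the raw columns of the table:

```python
import math

# Homogeneity index of item 4 (slide-34 data), then the correction that
# removes the item's own contribution to the total score.
x4 = [3, 5, 3, 4, 5]           # item-4 scores of A..E
xt = [13, 15, 14, 14, 16]      # total scores

def pearson(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    num = n * sum(a * b for a, b in zip(x, y)) - sx * sy
    den = math.sqrt((n * sum(a * a for a in x) - sx ** 2) *
                    (n * sum(b * b for b in y) - sy ** 2))
    return num / den

def sd(v):
    m = sum(v) / len(v)
    return math.sqrt(sum(a * a for a in v) / len(v) - m ** 2)

r = pearson(x4, xt)            # ≈ 0.88, inflated
s_j, s_t = sd(x4), sd(xt)      # ≈ 0.89, ≈ 1.02
r_c = (r * s_t - s_j) / math.sqrt(s_t ** 2 + s_j ** 2 - 2 * r * s_t * s_j)
# r_c ≈ 0: with only 4 items the correction is dramatic
```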

36 3. Discrimination 3.3. Discrimination in attitude items The big difference produced by the correction is due to the small number of items used in the example. – As the number of items increases, that effect decreases because the influence of each item score on the total score gets smaller. With more than about 25 items, the corrected and uncorrected results are very close.

37 3. Discrimination 3.3. Discrimination in attitude items Another procedure: – Useful but less efficient than the previous one, because it does not use the entire sample. – Determine whether the item mean for the subjects with the highest scores on the total test is statistically higher than the mean for those with the lowest scores. It is common to use the 25% or 27% of subjects with the best and worst scores. – Once the groups are identified, we test whether the mean difference is statistically significant with Student's t test. – H0: the means of both groups are equal.

38 3. Discrimination 3.3. Discrimination in attitude items X̄_u = mean of the scores obtained on the item by the 25% of participants with the highest test scores. X̄_l = mean of the scores obtained on the item by the 25% of participants with the lowest test scores. S²_u = variance of the item scores in the upper group. S²_l = variance of the item scores in the lower group. n_u and n_l = number of participants in the upper and lower groups, respectively. – Conclusions: T ≤ T(α, n_u+n_l−2) — the null hypothesis is accepted; there are no statistically significant differences between the means, and the item does not discriminate adequately. T > T(α, n_u+n_l−2) — the null hypothesis is rejected; there are statistically significant differences between the means, and the item discriminates adequately. – Student's t test is used when the item and scale scores are normally distributed and their variances are equal. If either assumption is violated, a non-parametric test should be used (e.g., the Mann-Whitney U).

39 3. Discrimination 3.3. Discrimination in attitude items Exercise: using the data of the previous example, calculate Student's t for item 2 (α = 0.05). To calculate the discrimination of item 2 with the t test, we have to form groups with extreme scores. For didactic purposes, we use just 2 participants per group.

Upper group: E (XT = 16), X2 = 5; B (XT = 15), X2 = 4
Lower group: A (XT = 13), X2 = 4; C (XT = 14), X2 = 2

40 3. Discrimination 3.3. Discrimination in attitude items

Upper group   X2   X2²
E (16)        5    25
B (15)        4    16
Σ             9    41

Lower group   X2   X2²
A (13)        4    16
C (14)        2    4
Σ             6    20

41 3. Discrimination 3.3. Discrimination in attitude items Group means: upper = 9/2 = 4.5, lower = 6/2 = 3; the resulting t ≈ 1.9. One tail: T(α, n_u+n_l−2) = T(0.05, 2+2−2) = T(0.05, 2) = 2.92. Since 1.9 < 2.92, the null hypothesis is accepted: there are no statistically significant differences between the means, and the item does not discriminate adequately.
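A sketch of the computation follows. The exact t formula behind the slides' value is not shown; using population (n-divisor) group variances in t = d / √(S²_u/n_u + S²_l/n_l) reproduces t ≈ 1.9, though other textbook pooled-variance variants give slightly different values:

```python
import math

# Extreme-groups t for item 2 (slide-39 groups of n = 2), assuming
# population-variance form t = d / sqrt(Su2/nu + Sl2/nl).
upper = [5, 4]     # item-2 scores of E and B (highest totals)
lower = [4, 2]     # item-2 scores of A and C (lowest totals)

def mean(v):
    return sum(v) / len(v)

def pvar(v):
    """Population (n-divisor) variance."""
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

t = (mean(upper) - mean(lower)) / math.sqrt(
    pvar(upper) / len(upper) + pvar(lower) / len(lower))
# t ≈ 1.9 < 2.92 = t(0.05, df=2): the item does not discriminate
```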

42 3. Discrimination 3.4. Factors that affect the discrimination 3.4.1. Variability Relation between test variability and item discrimination: if the test is composed of dichotomous items,

S_X = Σ_j S_j·r_jX = Σ_j √(p_j·q_j)·r_jX

To maximize the discriminative ability of a test, we have to consider both the difficulty (p_j) and the discrimination (r_jx) of its items together. – The maximum is achieved when discrimination is maximal (r_jx = 1) and difficulty is medium (p_j = 0.5).

43 3. Discrimination 3.4. Factors that affect the discrimination 3.4.2. Item difficulty An item reaches its maximum discriminative power when its difficulty is medium. (Figure: item discrimination plotted as a function of item difficulty; the curve peaks at medium difficulty.)

44 3. Discrimination 3.4. Factors that affect the discrimination 3.4.3. Dimensionality of the test When we construct a test, we usually try to measure a single construct (unidimensionality). In multidimensional tests, item discrimination should be estimated considering only the items that belong to the same dimension.

45 3. Discrimination 3.4. Factors that affect the discrimination 3.4.4. Test reliability If discrimination is defined as the correlation between the scores obtained by participants on the item and on the test, then reliability and discrimination are closely related. It is possible to express Cronbach's alpha coefficient in terms of the discrimination of the items:

α = (n/(n − 1)) · (1 − Σ_j S_j² / (Σ_j S_j·r_jx)²)

Small values of item discrimination are typically associated with unreliable tests.

46 3. Discrimination 3.4. Factors that affect the discrimination 3.4.4. Test reliability As the mean discrimination of the test increases, so does the reliability coefficient. (Figure: KR-21 reliability coefficient plotted against mean item discrimination, showing an increasing relationship.)

47 4. Indices of reliability and validity of the items 4.1. Reliability index It quantifies the degree to which an item measures the attribute of interest accurately:

RI_j = S_j · D_j

S_j = standard deviation of the item scores. D_j = discrimination index of the item. When a correlation coefficient is used to calculate the discrimination of the item, RI_j = S_j · r_jx.

48 4. Indices of reliability and validity of the items 4.1. Reliability index To the extent that we select items with a higher RI, the reliability of the test improves. The highest possible value of RI is 1. Example: with the information in the table below, calculate the RI of item 4.

         p     r_bp
Item 4   0.47  0.5

49 4. Indices of reliability and validity of the items 4.1. Reliability index S_4 = √(0.47 · 0.53) ≈ 0.50, so RI = 0.50 · 0.5 ≈ 0.25.
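For a dichotomous item the standard deviation is √(p·q), so the RI follows in two lines:

```python
import math

# Reliability index of item 4 (slide-48 data), taking RI_j = S_j * r_bp
# with S_j = sqrt(p * q) for a dichotomous item.
p, r_bp = 0.47, 0.5
s_j = math.sqrt(p * (1 - p))   # ≈ 0.499
ri = s_j * r_bp                # ≈ 0.25
```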

50 4. Indices of reliability and validity of the items 4.2. Validity index The validity of an item is the correlation between the scores obtained by a sample of participants on the item and their scores on an external criterion of interest. – It serves to determine the degree to which each item of a test contributes to making successful predictions about that external criterion. When the criterion is a continuous variable and the item is dichotomous, we use the point-biserial correlation; here it is not necessary to subtract the item score from the criterion score, because the item is not included in it.

51 4. Indices of reliability and validity of the items 4.2. Validity index Test validity (r_xy) can be expressed in terms of the VI of the items; the higher the items' VIs are, the better the validity of the test:

r_xy = Σ_j S_j·r_jy / S_X = Σ_j √(p_j·q_j)·r_jy / Σ_j √(p_j·q_j)·r_jx

This formula shows how the validity of the test can be estimated from the discrimination indices of the items (r_jx), their validity indices (r_jy), and their difficulty indices (p_j).

52 Paradox in the selection of items: if we want to select items to maximize the reliability of the test we have to choose those items with a high discrimination index (r jx ), but this would lead us to reduce the validity of the test (r xy ) because it increases as validity indexes (VI) are high and reliability indexes (RI) are low. 52 4. Indices of reliability and validity of the items 4.2. Validity index

53 4. Indices of reliability and validity of the items 4.2. Validity index Example. The table below presents the scores of 5 participants on a test with 3 items. Calculate the validity index of the test (r_xy).

Participant  Item 1  Item 2  Item 3
A            0       0       1
B            1       1       1
C            1       0       0
D            1       1       1
E            1       1       1
r_jy         0.2     0.4     0.6

54 4. Indices of reliability and validity of the items 4.2. Validity index Item difficulties: p_1 = 0.8, p_2 = 0.6, p_3 = 0.8; item standard deviations S_j = √(p_j·q_j): S_1 = 0.4, S_2 ≈ 0.49, S_3 = 0.4. Total scores X = 1, 3, 1, 3, 3, so S_X = √(29/5 − 2.2²) ≈ 0.98.

55 4. Indices of reliability and validity of the items 4.2. Validity index

Participant  1    2    3    X  (X−it1)  (X−it2)  (X−it3)  (X−it1)²  (X−it2)²  (X−it3)²
A            0    0    1    1  1        1        0        1         1         0
B            1    1    1    3  2        2        2        4         4         4
C            1    0    0    1  0        1        1        0         1         1
D            1    1    1    3  2        2        2        4         4         4
E            1    1    1    3  2        2        2        4         4         4
r_jy         0.2  0.4  0.6     Σ=7      Σ=8      Σ=7      Σ=13      Σ=14      Σ=13

56 4. Indices of reliability and validity of the items 4.2. Validity index

r_xy = (S_1·r_1y + S_2·r_2y + S_3·r_3y) / S_X = (0.4·0.2 + 0.49·0.4 + 0.4·0.6) / 0.98 ≈ 0.53

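The validity-index computation can be sketched end to end, assuming the identity r_xy = Σ_j(S_j·r_jy) / S_X with S_j = √(p_j·q_j):

```python
import math

# Validity index of the slide-53 test from item statistics.
items = {1: [0, 1, 1, 1, 1],
         2: [0, 1, 0, 1, 1],
         3: [1, 1, 0, 1, 1]}
r_jy = {1: 0.2, 2: 0.4, 3: 0.6}   # item-criterion correlations (given)

totals = [sum(items[j][s] for j in items) for s in range(5)]  # [1,3,1,3,3]
m = sum(totals) / len(totals)
s_x = math.sqrt(sum(t * t for t in totals) / len(totals) - m ** 2)  # ≈ 0.98

num = 0.0
for j, col in items.items():
    p = sum(col) / len(col)
    num += math.sqrt(p * (1 - p)) * r_jy[j]

r_xy = num / s_x   # ≈ 0.53
```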

59 5. Analysis of distractors It involves examining the distribution of subjects across the wrong alternatives (distractors), in order to detect possible reasons for the low discrimination of an item, or to find alternatives that nobody selects, for example. The first step of this analysis implies checking: – That all incorrect options are chosen by a minimum number of subjects; if possible, they should be equiprobable. Criteria: each distractor has to be selected by at least 10% of the sample, without large differences between them. – That the test performance of the subjects who selected each incorrect alternative is lower than that of the subjects who selected the correct one. – It is expected that as the skill level of subjects increases, the percentage selecting incorrect alternatives decreases, and vice versa.

60 5. Analysis of distractors 5.1. Equiprobability of distractors Distractors are equiprobable if they are selected by a minimum of participants and are equally attractive to those who do not know the correct answer. Χ² test:

Χ² = Σ (O_i − E_i)² / E_i

E_i = expected (theoretical) frequency. O_i = observed frequency.

61 5. Analysis of distractors 5.1. Equiprobability of distractors Degrees of freedom: K − 1 (K = number of incorrect alternatives). H0: E_i = O_i (for participants who do not know the correct answer, every distractor is equally attractive). Conclusion: – Χ² ≤ Χ²(α, K−1) → the null hypothesis is accepted; the distractors are equally attractive. – Χ² > Χ²(α, K−1) → the null hypothesis is rejected; the distractors are not equally attractive.

62 5. Analysis of distractors 5.1. Equiprobability of distractors Example. Determine whether the incorrect alternatives are equally attractive (α = 0.05).

                    A     B*    C
Number of answers   136   142   92

63 5. Analysis of distractors 5.1. Equiprobability of distractors To be equiprobable, each distractor should be selected by (136 + 92)/2 = 114 participants.

64 5. Analysis of distractors 5.1. Equiprobability of distractors Χ² = (136 − 114)²/114 + (92 − 114)²/114 ≈ 8.49. Since 8.49 > 3.84, the null hypothesis is rejected: the incorrect alternatives are not equally attractive to all subjects, even though both met the criterion of being selected by at least 10% of the total sample (N = 370).
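The Χ² statistic for the two distractors is a one-liner once the expected frequency is fixed:

```python
# Chi-square equiprobability test for distractors A and C (slide-62),
# with the correct alternative B excluded.
observed = {"A": 136, "C": 92}
expected = sum(observed.values()) / len(observed)   # 228 / 2 = 114

chi2 = sum((o - expected) ** 2 / expected for o in observed.values())
# chi2 ≈ 8.49 > 3.84 (critical value at alpha = 0.05, df = 1): reject H0
```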

65 5. Analysis of distractors 5.2. Discriminative power of distractors A good distractor is expected to correlate negatively with the test scores. To quantify the discriminative power of the incorrect alternatives we use correlation; depending on the kind of variables, we use the biserial, point-biserial, phi, or Pearson coefficient.

66 5. Analysis of distractors 5.2. Discriminative power of distractors Example. Answers of 5 subjects to 4 items. Brackets show the alternative selected on each failed item; asterisks mark the correct alternative. Calculate the discrimination of distractor b in item 3.

Subject  1 (a*)  2 (b*)  3 (a*)  4 (c*)  X  (X−i)
A        0 (b)   1       0 (b)   1       2  2
B        1       1       0 (b)   1       3  3
C        1       1       1       1       4  3
D        0 (c)   0 (a)   0 (b)   1       1  1
E        1       1       1       0       3  2

67 5. Analysis of distractors 5.2. Discriminative power of distractors The subjects who selected alternative b (incorrect) on item 3 are A, B, and D. Their mean on the test, after eliminating the score of the analyzed item, is (2 + 3 + 1)/3 = 2. The total mean of the test, subtracting the item 3 score from each subject's total, is 11/5 = 2.2. The standard deviation of the (X − i) scores is √(27/5 − 2.2²) ≈ 0.75. The proportion of subjects who hit the item is 2/5 = 0.4, and the proportion who failed it is 3/5 = 0.6.

68 5. Analysis of distractors 5.2. Discriminative power of distractors The point-biserial correlation between the incorrect alternative b and the test scores (with the item score discounted) is negative. – Since the item score of the subjects who chose the incorrect alternative is 0, there is no need to remove anything else from their totals. The distractor discriminates in the opposite direction to the correct alternative: it is a good distractor.
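A sketch of the computation, treating "chose b" as the dichotomous variable and taking p as the proportion who chose the distractor (an assumption; the slides do not show which proportion they plugged in):

```python
import math

# Discriminative power of distractor "b" on item 3 (slide-66 data):
# point-biserial between "chose b" (1/0) and the rest score (X - i).
chose_b = [1, 1, 0, 1, 0]   # A, B, D selected alternative b
rest = [2, 3, 3, 1, 2]      # (X - i) for A..E

p = sum(chose_b) / len(chose_b)                                     # 0.6
mean_b = sum(r for c, r in zip(chose_b, rest) if c) / sum(chose_b)  # 2.0
mean_all = sum(rest) / len(rest)                                    # 2.2
sd = math.sqrt(sum(r * r for r in rest) / len(rest) - mean_all ** 2)

r_pb = (mean_b - mean_all) / sd * math.sqrt(p / (1 - p))   # ≈ -0.33
# negative: the distractor attracts lower-scoring subjects, as desired
```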

69 5. Analysis of distractors 5.2. Discriminative power of distractors Visual inspection of the distribution of subjects' answers across the alternatives: – p: proportion of subjects who selected each option. – Mean: test mean of the subjects who selected each alternative. – r_bp: discrimination index of each option.

                     A      B      C*
Skill level  High    20     25     55
             Low     40     35     25
p                    0.28   0.5    0.22
Mean                 5      10     9
r_bp                 -0.20  0.18   0.29

70 5. Analysis of distractors 5.2. Discriminative power of distractors Positive discrimination index: the correct alternative is mostly chosen by competent subjects. Distractor A: – It has been selected by an acceptable minimum of subjects (28%) and is selected by the less competent subjects in a higher proportion. – The test mean of the subjects who selected it is lower than the test mean of those who selected the correct alternative (consistent with its negative discrimination index).

71 5. Analysis of distractors 5.2. Discriminative power of distractors Distractor B should be revised: – It is chosen as correct by the subjects with the best test scores. – It is the most selected option (50%), its discrimination is positive, and the mean of the subjects who selected it is higher than that of the subjects who chose the correct alternative.

72 5. Analysis of distractors 5.2. Discriminative power of distractors In distractor analysis we can go further and use statistical inference. The test mean of the subjects who choose the correct alternative should be higher than the test mean of those who chose each distractor. – ANOVA. IV (factor): the item, with as many levels as answer alternatives. DV: the subjects' raw test scores.

