1 Educational Data Mining: Discovery with Models Ryan S.J.d. Baker PSLC/HCII Carnegie Mellon University Ken Koedinger CMU Director of PSLC Professor of Human-Computer Interaction & Psychology Carnegie Mellon University

2 In this segment… We will discuss Discovery with Models in (some) detail

3 Last time… We gave a very simple example of Discovery with Models using Bayesian Knowledge Tracing

4 Uses of Knowledge Tracing Can be interpreted to learn about skills
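The knowledge-tracing parameters in the table that follows (L0, the probability a skill is already known at the start, and T, the probability of learning it at each practice opportunity) drive a simple per-step update. A minimal sketch of the standard Bayesian Knowledge Tracing update (Corbett & Anderson's formulation); the guess and slip values here are illustrative defaults, not values from the tutor:

```python
def bkt_update(p_known, correct, guess=0.2, slip=0.1, transit=0.1):
    """One Bayesian Knowledge Tracing step:
    condition P(known) on the observed response, then apply learning (T)."""
    if correct:
        # P(known | correct): correct answers come from knowing (no slip) or guessing
        cond = (p_known * (1 - slip)) / (p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        # P(known | wrong): wrong answers come from slipping or not knowing
        cond = (p_known * slip) / (p_known * slip + (1 - p_known) * (1 - guess))
    # Chance T of learning the skill at this opportunity
    return cond + (1 - cond) * transit

# Illustrative run using L0 = 0.333, T = 0.497 from the table below
p = 0.333
for obs in [True, True, False]:
    p = bkt_update(p, obs, transit=0.497)
```

A high L0 means students largely enter already knowing the skill; a near-zero T means the tutor is not teaching it, which is what the next two slides probe.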

5 Skills from the Algebra Tutor

skill                                      L0     T
AddSubtractTypeinSkillIsolatepositiveIso   0.01
ApplyExponentExpandExponentsevalradicalE   0.333  0.497
CalculateEliminateParensTypeinSkillElimi   0.979  0.001
CalculatenegativecoefficientTypeinSkillM   0.953  0.001
Changingaxisbounds                         0.01
Changingaxisintervals                      0.01
ChooseGraphicala                           0.001  0.306
combineliketermssp                         0.943  0.001

6 Which skills could probably be removed from the tutor?

skill                                      L0     T
AddSubtractTypeinSkillIsolatepositiveIso   0.01
ApplyExponentExpandExponentsevalradicalE   0.333  0.497
CalculateEliminateParensTypeinSkillElimi   0.979  0.001
CalculatenegativecoefficientTypeinSkillM   0.953  0.001
Changingaxisbounds                         0.01
Changingaxisintervals                      0.01
ChooseGraphicala                           0.001  0.306
combineliketermssp                         0.943  0.001

7 Which skills could use better instruction?

skill                                      L0     T
AddSubtractTypeinSkillIsolatepositiveIso   0.01
ApplyExponentExpandExponentsevalradicalE   0.333  0.497
CalculateEliminateParensTypeinSkillElimi   0.979  0.001
CalculatenegativecoefficientTypeinSkillM   0.953  0.001
Changingaxisbounds                         0.01
Changingaxisintervals                      0.01
ChooseGraphicala                           0.001  0.306
combineliketermssp                         0.943  0.001

8 Why do Discovery with Models?
We have a model of some construct of interest or importance:
- Knowledge
- Meta-cognition
- Motivation
- Affect
- Collaborative behavior (helping acts, insults)
- Etc.

9 Why do Discovery with Models?
We can now use that model to:
- Find outliers of interest by finding out where the model makes extreme predictions
- Inspect the model to learn what factors are involved in predicting the construct
- Find out the construct's relationship to other constructs of interest, by studying its correlations/associations/causal relationships with data/models on the other constructs
- Study the construct across contexts or students, by applying the model within data from those contexts or students
- And more…

10 Finding Outliers of Interest
Finding outliers of interest by finding out where the model makes extreme predictions:
- As in the example from Bayesian Knowledge Tracing
- As in Ken's example yesterday of finding upward spikes in learning curves

11 Model Inspection
By looking at the features in the Gaming Detector, Baker, Corbett, & Koedinger (2004, in press) were able to see that:
- Students who game the system and have poor learning game the system on steps they don't know
- Students who game the system and have good learning game the system on steps they already know

12 Model Inspection: A tip The simpler the model, the easier this is to do Decision Trees and Linear/Step Regression: Easy.

13 Model Inspection: A tip The simpler the model, the easier this is to do Decision Trees and Linear/Step Regression: Easy. Neural Networks and Support Vector Machines: Fuhgeddaboudit!

14

15 Correlations to Other Constructs

16 Take Model of a Construct And see whether it co-occurs with other constructs of interest

17 Example
Detector of gaming the system (in a fashion associated with poorer learning) was correlated with questionnaire items assessing various motivations and attitudes (Baker et al., 2008)

18 Example
Detector of gaming the system (in a fashion associated with poorer learning) was correlated with questionnaire items assessing various motivations and attitudes (Baker et al., 2008)
Surprise: Nothing correlated very well (correlations between gaming and some attitudes were statistically significant, but very weak: r < 0.2)

19 Example More on this in a minute…

20 Studying a Construct Across Contexts Often, but not always, involves:

21 Model Transfer

22 Richard said that prediction assumes that the sample where the predictions are made is "the same as" the sample where the prediction model was made.
Not entirely true

23 Model Transfer It’s more that prediction assumes the differences “aren’t important” So how do we know that’s the case?

24 Model Transfer
You can use a classifier in contexts beyond where it was trained, with proper validation.
This can be really nice:
- You may only have to train on data from 100 students and 4 lessons, and then you can use your classifier in cases where there is data from 1000 students and 35 lessons
- Especially nice if you have some unlabeled data set with nice properties, such as additional questionnaire data (cf. Baker, 2007; Baker, Walonoski, Heffernan, Roll, Corbett, & Koedinger, 2008)

25 Validate the Transfer
You should make sure your model is valid in the new context (cf. Roll et al., 2005; Baker et al., 2006).
Depending on the type of model, and what features go into it, your model may or may not be valid for data taken:
- From a different system
- In a different context of use
- With a different population

26 Validate the Transfer For example Will an off-task detector trained in schools work in dorm rooms?

27 Validate the Transfer
For example: Will a gaming detector trained in a tutor where {gaming = systematic guessing, hint abuse} work in a tutor where {gaming = point cartels}?

28 Validate the Transfer However Will a gaming detector trained in a tutor unit where {gaming=systematic guessing, hint abuse} Work in a different tutor unit where {gaming=systematic guessing, hint abuse}?

29 Maybe…

30 Baker, Corbett, Koedinger, & Roll (2006)
We tested whether a gaming detector trained in a tutor unit where {gaming = systematic guessing, hint abuse} would work in a different tutor unit where {gaming = systematic guessing, hint abuse}

31 Scheme Train on data from three lessons, test on a fourth lesson For all possible combinations of 4 lessons (4 combinations)

32 Transfer lessons vs. training lessons
Ability to distinguish students who game from non-gaming students:
Overall performance in training lessons: A' = 0.85
Overall performance in test lessons: A' = 0.80
Difference is NOT significant, Z = 1.17, p = 0.24 (using Strube's Adjusted Z)
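A' is the probability that the detector ranks a randomly chosen gaming student above a randomly chosen non-gaming student (equivalent to the area under the ROC curve), so it can be computed directly from detector confidences. A small sketch; the scores below are made up for illustration:

```python
import itertools

def a_prime(scores_pos, scores_neg):
    """A': fraction of (gaming, non-gaming) pairs where the gaming student
    gets the higher detector confidence; ties count as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p, n in itertools.product(scores_pos, scores_neg))
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical detector confidences for gaming and non-gaming students
gaming = [0.9, 0.8, 0.6]
non_gaming = [0.4, 0.5, 0.7, 0.2]
print(a_prime(gaming, non_gaming))  # 11 of 12 pairs ranked correctly
```

A' = 0.5 is chance performance and A' = 1.0 is perfect separation, which is why the 0.85 vs. 0.80 comparison above is a meaningful test of transfer.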

33 So transfer is possible… Of course 4 successes over 4 lessons from the same tutor isn’t enough to conclude that any model trained on 3 lessons will transfer to any new lesson

34 What we can say is…

35 If… If we posit that these four cases are “successful transfer”, and assume they were randomly sampled from lessons in the middle school tutor…

36 Maximum Likelihood Estimation

37 Studying a Construct Across Contexts Using this detector (Baker, 2007)

38 Research Question
Do students game the system because of state or trait factors?
If trait factors are the main explanation, differences between students will explain much of the variance in gaming.
If state factors are the main explanation, differences between lessons could account for many (but not all) state factors, and explain much of the variance in gaming.
So: is the student or the lesson a better predictor of gaming?

39 Application of Detector
After validating its transfer, we applied the gaming detector across 35 lessons, used by 240 students, from a single Cognitive Tutor, giving us, for each student in each lesson, a gaming frequency

40 Model
Linear regression models:
Gaming frequency = Lesson + ε0
Gaming frequency = Student + ε0

41 Model
Categorical variables transformed to a set of binaries, i.e. Lesson = Scatterplot becomes:
3DGeometry = 0
Percents = 0
Probability = 0
Scatterplot = 1
Boxplot = 0
Etc.
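This dummy-coding step can be sketched in a few lines; `one_hot` is a hypothetical helper written for illustration, not part of any tutor codebase:

```python
def one_hot(lessons):
    """Expand a categorical 'lesson' column into one binary column per lesson."""
    categories = sorted(set(lessons))
    rows = [[1 if lesson == c else 0 for c in categories] for lesson in lessons]
    return rows, categories

# Each student-lesson observation becomes a row of 0/1 indicators
rows, cols = one_hot(["Scatterplot", "Percents", "Scatterplot", "Boxplot"])
# cols is the sorted lesson list; rows[0] has a 1 only in the Scatterplot column
```

With 35 lessons this yields 35 binary predictors, and with 240 students it yields 240, which is exactly the parameter-count asymmetry the next slides worry about.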

42 Metrics

43 r²
The correlation, squared: the proportion of variability in the data set that is accounted for by a statistical model

44 r²
The correlation, squared: the proportion of variability in the data set that is accounted for by a statistical model

45 r²
However, a limitation: the more variables you have, the more variance you should expect to predict, just by chance

46 r²
We should expect 240 students to predict gaming better than 35 lessons, just by overfitting

47 So what can we do?

48 Our good friend BiC
Bayesian Information Criterion (Raftery, 1995)
Makes a trade-off between goodness of fit and flexibility of fit (number of parameters)
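For intuition, the standard least-squares form of BIC makes the trade-off concrete: n·ln(RSS/n) rewards fit while k·ln(n) penalizes parameters. (The slides use BiC', Raftery's variant, so the numbers below are purely illustrative, not the ones from the study.)

```python
import math

def bic(rss, n, k):
    """Least-squares Bayesian Information Criterion (lower is better):
    goodness of fit (residual sum of squares) plus a penalty per parameter."""
    return n * math.log(rss / n) + k * math.log(n)

# A slightly worse fit with far fewer parameters can still win on BIC
small_model = bic(rss=10.0, n=100, k=5)
big_model = bic(rss=9.5, n=100, k=40)
print(small_model < big_model)
```

This is why, on the next slides, the 240-parameter student model can fit more variance than the 35-parameter lesson model yet score far worse on BiC'.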

49 Predictors

50 The Lesson
Gaming frequency = Lesson + ε0
35 parameters
r² = 0.55
BiC' = -2370
→ Model is significantly better than chance would predict, given model size & data set size

51 The Student
Gaming frequency = Student + ε0
240 parameters
r² = 0.16
BiC' = 1382
→ Model is worse than chance would predict, given model size & data set size!

52 Standard deviation bars, not standard error bars

53 In this talk…
Discovery with Models to:
- Find outliers of interest by finding out where the model makes extreme predictions
- Inspect the model to learn what factors are involved in predicting the construct
- Find out the construct's relationship to other constructs of interest, by studying its correlations/associations/causal relationships with data/models on the other constructs
- Study the construct across contexts or students, by applying the model within data from those contexts or students

54 Necessarily… Only a few examples given in this talk

55 An area of increasing importance within EDM…

56 In the last 3 days we have discussed (or at least mentioned) 5 broad areas of EDM Prediction Clustering Relationship Mining Discovery with Models Distillation of Data for Human Judgment

57 Now it’s your turn To use these techniques to answer important questions about learners and learning To improve these techniques, moving forward

58 To learn more
Baker, R.S.J.d. (under review) Data Mining in Education. Under review for inclusion in the International Encyclopedia of Education. (Available upon request)
Baker, R.S.J.d., Barnes, T., Beck, J.E. (2008) Proceedings of the First International Conference on Educational Data Mining.
Romero, C., Ventura, S. (2007) Educational Data Mining: A Survey from 1995 to 2005. Expert Systems with Applications, 33(1), 135-146.

59 END

60
values  a         b         c         d         e         f         g         h         i         j         k
0.1     0.31703   0.184794  0.292674  0.968429  0.599052  0.258772  0.288868  0.479694  0.845986  0.312878  0.325583
0.2     0.587882  0.818468  0.66771   0.286849  0.571331  0.878487  0.368984  0.156295  0.529126  0.009659  0.827527
0.3     0.069229  0.614344  0.016678  0.625279  0.07258   0.60644   0.376906  0.546482  0.780456  0.85199   0.99095
0.4     0.134072  0.761594  0.45686   0.075598  0.902216  0.349661  0.41452   0.377848  0.271817  0.808268  0.152187
0.5     0.773527  0.568502  0.212827  0.296644  0.606759  0.763751  0.337572  0.658086  0.527355  0.248425  0.306963
0.6     0.382031  0.954357  0.46915   0.793141  0.422994  0.00778   0.132219  0.218946  0.26634   0.204495  0.428783
0.7     0.499437  0.317859  0.56981   0.97822   0.926654  0.549637  0.241934  0.293575  0.910287  0.498185  0.803212
0.8     0.452056  0.133885  0.554752  0.771215  0.77231   0.867048  0.398835  0.310958  0.779538  0.75974   0.127566
0.9     0.013696  0.055595  0.887505  0.253549  0.529121  0.301857  0.846878  0.989624  0.480956  0.442541  0.614105
1       0.504806  0.462066  0.596407  0.986423  0.535024  0.475623  0.450906  0.07588   0.036826  0.995523  0.827306

61 The same table, annotated: the "values" column is real data; columns a–k are random numbers

62
num vars  r²
1         0.000
2         0.144
3         0.370
4         0.411
5         0.421
6         0.422
7         0.612
8         0.703
9         1
10        1

63 r²
Nine variables of random junk successfully got an r² of 1 on ten data points.
And that's what we call overfitting
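This effect is easy to reproduce: with an intercept plus nine random predictors, ordinary least squares has ten parameters for ten data points, so the fit passes through every point exactly. A dependency-free sketch (the random data here is generated fresh, not the table above):

```python
import random

random.seed(0)
n = 10
y = [random.random() for _ in range(n)]                     # "real data" column
X = [[1.0] + [random.random() for _ in range(9)] for _ in range(n)]  # intercept + 9 junk columns

def solve(A, b):
    """Gaussian elimination with partial pivoting: solve the square system A x = b."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    size = len(m)
    for col in range(size):
        pivot = max(range(col, size), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(size):
            if r != col and m[col][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b2 for a, b2 in zip(m[r], m[col])]
    return [m[i][-1] / m[i][i] for i in range(size)]

# Ten parameters, ten points: the least-squares fit is an exact interpolation
coef = solve(X, y)
y_hat = [sum(c * x for c, x in zip(coef, row)) for row in X]
ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
mean_y = sum(y) / n
ss_tot = sum((a - mean_y) ** 2 for a in y)
r2 = 1 - ss_res / ss_tot   # 1.0 up to floating-point error
```

The "fit" is meaningless, of course: the predictors are noise, so the model would predict nothing on an eleventh point, which is exactly the argument for the BiC' penalty on the earlier slides.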

