
1 Cristina Conati, Department of Computer Science, University of British Columbia
Beyond Problem-solving: Student-adaptive Interactive Simulations for Math and Science

2 Overview
- Motivations
- Challenges of devising student-adaptive simulations
- Two examples of how we target these challenges
  - ACE: interactive simulation for mathematical functions
  - CSP Applet: interactive simulation for an AI algorithm
- Conclusions and future work

3 Intelligent Tutoring Systems (ITS)
- Create computer-based tools that support individual learners by autonomously and intelligently adapting to their specific needs
- [Architecture diagram: Student Model, Domain Model, Tutor, Adaptive Interventions]
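
As a rough illustration of this loop, here is a minimal, hypothetical sketch (not any specific system's architecture): learner actions update a student model, and the tutor consults it together with a domain model to pick an adaptive intervention.

```python
# Hypothetical sketch of the ITS adaptation loop on this slide.
# All names and the intervention policy are invented for illustration.

class StudentModel:
    def __init__(self):
        self.explored = set()              # concepts the learner has touched

    def update(self, action_concept):
        # A real ITS would update beliefs probabilistically; here we just record.
        self.explored.add(action_concept)


def tutor_intervention(student_model, domain_model):
    """Suggest the first domain concept the learner has not explored yet."""
    for concept in domain_model:
        if concept not in student_model.explored:
            return f"Hint: try exploring '{concept}'"
    return None                            # nothing to adapt to right now


domain = ["positive slope", "negative slope", "y-intercept"]
model = StudentModel()
model.update("positive slope")
print(tutor_intervention(model, domain))   # -> suggests 'negative slope'
```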

4 ITS Achievements
- In the last 20 years, there have been many successful initiatives in devising Intelligent Tutoring Systems (Woolf 2009, Building Intelligent Interactive Tutors, Morgan Kaufmann)
- Mainly ITS that provide individualized support for problem solving through tutor-led interaction (coached problem solving)
  - Well-defined problem solutions => guidance on problem-solving steps
  - Clear definition of correctness => basis for feedback

5 Beyond Coached Problem Solving
- Coached problem solving is a very important component of learning
- Other forms of instruction, however, can help learners acquire the target skills and abilities
  - At different stages of the learning process
  - For learners with specific needs and preferences
- Our goal: extend ITS to other learning activities that support student initiative and engagement:
  - Interactive simulations
  - Educational games

6 Overview
- Motivations
- Challenges of devising student-adaptive simulations
- Two examples of how we target these challenges
  - ACE: interactive simulation for mathematical functions
  - CSP Applet: interactive simulation for an AI algorithm
- Conclusions and future work

7 Challenges
- Activities are more open-ended and less well-defined than pure problem solving
  - No clear definition of correct/successful behavior
- Different user states need to be captured (meta-cognitive, affective) in order to provide good tutorial interventions
  - Difficult to assess unobtrusively from interaction events
- How to model what the student is doing?
- How to provide feedback that fosters learning while maintaining student initiative and engagement?

8 Our Approach
- Student models based on formal methods for probabilistic reasoning and machine learning
- Increase the information available to the student model through innovative input devices:
  - e.g. eye-tracking and physiological sensors
- Iterative model design and evaluation

9 Overview
- Motivations
- Challenges of devising student-adaptive simulations
- Two examples of how we target these challenges
  - ACE: interactive simulation for mathematical functions
  - CSP Applet: interactive simulation for an AI algorithm
- Conclusions and future work

10 ACE: Adaptive Coach for Exploration
- Activities organized into units to explore mathematical functions (e.g. input/output, equation/plot)
- Probabilistic student model that captures student exploratory behavior and other relevant traits
- Tutoring agent that generates tailored suggestions to improve student exploration/learning when necessary (Bunt, Conati, Hugget, Muldner, AIED 2001)

11 Adaptive Coach for Exploration

12 Adaptive Coach for Exploration

13 Adaptive Coach for Exploration
Coach suggestion: "Before you leave this exercise, why don't you try scaling the function by a large negative value? Think about how this will affect the plot."

14 ACE Student Model (Bunt and Conati 2002)
[Model structure diagram; node types: Individual Exploration Cases, Exploration of Exercises, Exploration Categories, Exploration of Units, Knowledge]
- Iterative process of design and evaluation
- Probabilistic model of how individual exploration actions influence exploration and understanding of exercises and concepts
  - e.g. (in the Plot unit) positive/negative slope, positive/negative intercept, large/small and positive/negative exponents...
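
A toy illustration of the idea (not the actual Bayesian network used in ACE): individual exploration actions provide probabilistic evidence for the exploration of the exercise that contains them. All numbers and case names below are invented.

```python
# Toy sketch of exploration evidence aggregation (illustration only).

# Relevant exploration cases for one Plot-unit exercise.
cases = ["positive slope", "negative slope",
         "positive intercept", "negative intercept"]

# Which cases the learner has actually tried so far.
explored = {"positive slope", "positive intercept"}

# P(case explored): high if we saw a matching action, small prior otherwise.
p_case = {c: (0.9 if c in explored else 0.1) for c in cases}

# P(exercise explored) modeled here as the average of its cases'
# probabilities; the real model uses a Bayesian network instead.
p_exercise = sum(p_case.values()) / len(p_case)

print(f"P(exercise explored) = {p_exercise:.2f}")   # -> 0.50
```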

15 Modeling Student Exploration
- Our first attempt (Bunt and Conati, 2002)
[Diagram: Interface Actions -> Student Model (number and coverage of exploratory actions, e.g. positive/negative y-intercept; odd/even, positive/negative exponent...) -> Learning]

16 Preliminary Evaluation
- Quasi-experimental design with 13 participants using ACE (Bunt and Conati 2002)
  - The more exercises were effectively explored according to the student model, the more the students improved
  - The more hints students followed, the more they learned
- Because the model only considers coverage of student actions, it can overestimate student exploration
  - Need to consider whether the student is reasoning about the effects of his/her actions: the self-explanation meta-cognitive skill

17 However
- When considering only coverage of exploratory actions, the model may overestimate the effectiveness of student exploration (Bunt and Conati 2002)
- Need to consider whether the student is reasoning about the effects of his/her actions
- Self-explanation: the meta-cognitive skill of explaining instructional material, or here the outcomes of one's exploratory actions, to oneself

18 Revised User Model (Bunt, Muldner and Conati, ITS 2004; Merten and Conati, Knowledge-Based Systems 2007)
[Diagram: Interface Actions and input from the eye-tracker feed the Student Model, which assesses learning from:
- Number and coverage of student actions
- Self-explanation of action outcomes
- Time between actions
- Gaze shifts in the Plot unit]
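
A toy illustration (not the published model) of how the two new evidence sources could be combined into an estimate of whether the learner self-explained an action's outcome; all thresholds and probabilities are invented.

```python
# Heuristic sketch of self-explanation assessment from pause length and
# gaze shifts between the equation and the plot (illustration only).

def p_self_explanation(pause_seconds: float, gaze_shifts: int) -> float:
    """Rough probability that the learner self-explained an action.

    pause_seconds: time between this action and the next one.
    gaze_shifts:   number of equation<->plot gaze shifts in that pause.
    """
    long_pause = pause_seconds >= 5.0          # enough time to reflect (assumed threshold)
    attended_plot = gaze_shifts >= 1           # looked at the effect of the action

    if long_pause and attended_plot:
        return 0.9
    if long_pause or attended_plot:
        return 0.5
    return 0.1

print(p_self_explanation(8.2, 2))   # 0.9: long pause plus gaze evidence
print(p_self_explanation(1.3, 0))   # 0.1: quick action, no gaze shift
```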

19 Sample Gaze Shift

20 Results on Accuracy
- We evaluated the complete model against:
  - The original model with no self-explanation
  - A model that uses only time between actions as evidence of self-explanation

21 What’s Next (1)
- Test adaptive interventions to trigger self-explanation (Conati 2011)

22 Tools to scaffold the construction of self-explanations

23 Discussion
- The ACE work provided evidence that:
  - It is possible to track more "open-ended" student behaviors than structured problem solving
  - Eye-tracking can support the process
- However, hand-coding the relevant behaviors, as we did for ACE (knowledge-based approach), is:
  - Time consuming
  - Likely to miss other, less intuitive patterns of interaction related to learning (or lack thereof)

24 Alternative Approach (Amershi and Conati 2009; Kardan and Conati 2011)
Behavior Discovery via Data Mining
[Pipeline diagram: action logs and other data -> vector of interaction features (frequency of actions, latency between actions, ...) -> clustering (groups together students that have similar interaction behaviors) -> association rule mining (extracts rules describing distinguishing patterns in each cluster); experts interpret the clusters in terms of learning, using performance measure(s)]
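
The pipeline can be sketched in a few lines. The sketch below uses scikit-learn for the clustering step and a crude stand-in for the association-rule step (reporting each cluster's most distinctive features); the feature names and values are invented, and the published work uses a dedicated class-association-rule miner rather than this shortcut.

```python
# Minimal behavior-discovery sketch: feature vectors -> clusters -> patterns.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one student's interaction-feature vector (values invented).
feature_names = ["arc_click_freq", "arc_click_pause_mean", "step_freq"]
X = np.array([
    [0.9, 6.0, 0.2],
    [0.8, 5.5, 0.3],
    [0.2, 1.0, 0.9],
    [0.1, 1.2, 0.8],
])

Xz = StandardScaler().fit_transform(X)          # normalize features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xz)

# Crude stand-in for rule mining: describe what makes each cluster distinct.
for c in np.unique(labels):
    diff = Xz[labels == c].mean(axis=0)         # cluster mean in z-scores
    top = np.argsort(-np.abs(diff))[:2]
    desc = ", ".join(f"{feature_names[i]} {'high' if diff[i] > 0 else 'low'}"
                     for i in top)
    print(f"cluster {c}: {desc}")
```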

25 Overview u Motivations u Challenges of devising student-adaptive simulations u Two examples of how we target these challenges –ACE: interactive simulation for mathematical functions –CSP Applet: interactive simulation for AI algorithm u Conclusions and Future work

26 Tested with the AISpace CSP Applet
- AISpace (Amershi et al., 2007)
  - Set of applets implementing interactive simulations of common Artificial Intelligence algorithms
  - Used regularly in our AI courses
  - Google "AISpace" if you want to try it out
- The applet for Constraint Satisfaction Problems (CSP) visualizes the workings of the AC-3 algorithm
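
For reference, a minimal textbook-style sketch of the AC-3 algorithm that the applet animates (binary constraints represented as directed-arc predicates; this is not the applet's own code).

```python
# Standard AC-3 arc-consistency sketch.
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of values}
       constraints: {(x, y): predicate(vx, vy)} for each directed arc."""
    queue = deque(constraints.keys())           # all arcs start "to do"
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):  # pruned some value of x
            if not domains[x]:
                return False                    # domain wiped out: no solution
            # re-check arcs pointing into x (except the one from y)
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

def revise(domains, constraints, x, y):
    """Remove values of x with no supporting value in y's domain."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

# Tiny example: constraint A < B, both domains {1, 2, 3}.
doms = {"A": {1, 2, 3}, "B": {1, 2, 3}}
cons = {("A", "B"): lambda a, b: a < b, ("B", "A"): lambda b, a: a < b}
print(ac3(doms, cons), doms)   # True, A={1,2}, B={2,3}
```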

27 AISpace CSP Applet: Direct Arc Clicking

28 Clustering
- Algorithms that find patterns in unlabelled sets of data

29 Clustering
- Algorithms that find patterns in unlabelled sets of data

30 User Study (Kardan and Conati 2011)
- 65 subjects
  - Read intro material on the AC-3 algorithm
  - Pre-test
  - Use the CSP applet on two problems
  - Post-test
- 13,078 actions
- More than 17 hours of interaction

31 Dataset
- Features:
  - Frequency of use for each action
  - Pause duration between actions (mean and SD)
  - 7 actions => 21 features
- Performance measure for validation:
  - Learning gain from pre-test to post-test
[Pipeline recap: feature vectors -> clustering -> rule mining (behavior discovery)]
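
A sketch of how such a 21-dimensional feature vector could be computed from an action log; the log format and the action-type names are assumptions for illustration, not the applet's actual logging format.

```python
# Per action type: relative frequency, mean pause after it, SD of that pause.
import numpy as np

ACTION_TYPES = ["fine_step", "direct_arc_click", "auto_arc_consistency",
                "domain_split", "backtrack", "reset", "auto_solve"]  # names assumed

def feature_vector(log):
    """log: chronological list of (action_type, timestamp_seconds)."""
    total = len(log)
    counts = {a: 0 for a in ACTION_TYPES}
    pauses = {a: [] for a in ACTION_TYPES}
    for i, (action, t) in enumerate(log):
        counts[action] += 1
        if i + 1 < len(log):                       # pause until the next action
            pauses[action].append(log[i + 1][1] - t)
    feats = []
    for a in ACTION_TYPES:
        p = pauses[a]
        feats += [counts[a] / total,               # relative frequency
                  float(np.mean(p)) if p else 0.0, # mean pause after the action
                  float(np.std(p)) if p else 0.0]  # SD of those pauses
    return np.array(feats)                         # 7 x 3 = 21 features

demo_log = [("direct_arc_click", 0.0), ("direct_arc_click", 4.2),
            ("fine_step", 9.0), ("reset", 10.5)]
print(feature_vector(demo_log).shape)              # (21,)
```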

32 Clustering
- Found 2 clusters
- Statistically significant difference in learning gains (LG)
  - High Learners (HL) and Low Learners (LL) clusters
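
A minimal sketch of this validation step with invented data: cluster the students on their interaction features, then test whether learning gains differ between the two clusters (scikit-learn and SciPy assumed; not the study's actual analysis script).

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(20, 21))      # 20 students x 21 features (fake)
learning_gain = np.random.default_rng(1).normal(size=20)  # fake pre/post gains

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
g0, g1 = learning_gain[labels == 0], learning_gain[labels == 1]

t, p = stats.ttest_ind(g0, g1, equal_var=False)          # Welch's t-test
print(f"mean LG: cluster0={g0.mean():.2f}, cluster1={g1.mean():.2f}, p={p:.3f}")
# If p is small, label the higher-gain cluster HL and the other LL.
```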

33 Usefulness: Sample Rules
HL members:
- Use the Direct Arc Click action very frequently (R1)
LL members:
- Use Direct Arc Click sparsely (R3)
- Leave little time between a Direct Arc Click and the next action (R2)
HL cluster:
- R1: Direct Arc Click frequency = Highest (Conf = 100%, Class Cov = 100%)
LL cluster:
- R2: Direct Arc Click Pause Avg = Lowest (Conf = 100%, Class Cov = 100%)
- R3: Direct Arc Click frequency = Lowest (Conf = 93%, Class Cov = 93.5%)
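
For clarity, a sketch of how the two measures attached to each rule can be computed, assuming the standard definitions (confidence = fraction of rule matches that belong to the target cluster; class coverage = fraction of the target cluster that the rule matches). The data below are invented.

```python
def rule_stats(students, fires, target_cluster):
    """students: list of (cluster_label, feature_dict)
       fires: predicate on feature_dict saying whether the rule applies."""
    matched = [c for c, f in students if fires(f)]
    in_target = [c for c, _ in students if c == target_cluster]
    confidence = matched.count(target_cluster) / len(matched)
    class_cov = matched.count(target_cluster) / len(in_target)
    return confidence, class_cov

# Toy data: 3 HL and 3 LL students with a discretized feature.
data = [("HL", {"arc_click_freq": "highest"}), ("HL", {"arc_click_freq": "highest"}),
        ("HL", {"arc_click_freq": "highest"}), ("LL", {"arc_click_freq": "lowest"}),
        ("LL", {"arc_click_freq": "lowest"}),  ("LL", {"arc_click_freq": "medium"})]

r3 = lambda f: f["arc_click_freq"] == "lowest"          # an R3-style rule
print(rule_stats(data, r3, "LL"))                       # (1.0, 0.666...)
```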

34 Great, but what do we do with this?
- We can use the learned clusters and rules to classify a new student based on her behaviors
- Use detected behaviours for adaptive support
  - Promoting the behaviours conducive to learning
  - Discouraging/preventing detrimental behaviours

35 The User Modeling Framework
[Framework diagram, two phases:
- Behavior discovery (offline): action logs and other data -> feature vector calculation -> clustering -> association rule mining
- User classification (online): a new user's actions -> vector of interaction features -> online classifier -> adaptive interventions]
Example interventions from the rules:
- If the user is an LL and uses Direct Arc Click very infrequently (R3), then prompt this action
- If the user is an LL and pauses very briefly after a Direct Arc Click (R2), then take action to slow her down

36 Usefulness: Possible Interventions
- IF the user is classified as an LL and uses Direct Arc Click very infrequently (R3), THEN give a hint to prompt this action
- IF the user is classified as an LL and pauses very briefly after a Direct Arc Click (R2), THEN intervene to slow down the student
LL cluster:
- R2: Direct Arc Click Pause Avg = Lowest (Conf = 100%, Class Cov = 100%)
- R3: Direct Arc Click frequency = Lowest (Conf = 93%, Class Cov = 93.5%)
Samad Kardan is currently working on these.
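
A sketch of how these rule-based interventions could be wired to the classifier output; the feature names, thresholds, and hint texts are hypothetical.

```python
# Map the online classification plus the mined rules to interventions.

def choose_interventions(predicted_cluster, features):
    """features: dict of the new user's current interaction features."""
    hints = []
    if predicted_cluster != "LL":
        return hints                                   # HL: no intervention needed
    if features["arc_click_freq"] < 0.05:              # R3-style condition (threshold assumed)
        hints.append("Hint: try clicking directly on arcs to make them consistent.")
    if features["arc_click_pause_mean"] < 2.0:         # R2-style condition (threshold assumed)
        hints.append("Slow down: pause and predict the effect before the next click.")
    return hints

print(choose_interventions("LL", {"arc_click_freq": 0.01,
                                  "arc_click_pause_mean": 1.0}))
```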

37 Classifier Evaluation
- Leave-one-out cross-validation on a dataset of 64 users
- For each user u in the dataset:
  1. Remove user u
  2. Do behaviour discovery on the remaining 63 users
  3. After each of u's actions:
     - Calculate the feature vector for u
     - Classify it
     - Compare with u's original label
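
A sketch of this evaluation loop; the pipeline steps (behavior discovery, feature extraction, classification) are passed in as functions, and the demo below uses trivial stand-ins rather than the real behavior-discovery model.

```python
def leave_one_out(users, discover, featurize, classify):
    """users: list of (action_log, true_label).
    Returns accuracy as a function of the number of observed actions."""
    correct, seen = {}, {}
    for i, (log, label) in enumerate(users):
        rest = [u for j, u in enumerate(users) if j != i]   # hold this user out
        model = discover(rest)                              # clustering + rules
        for n in range(1, len(log) + 1):                    # after each action
            pred = classify(model, featurize(log[:n]))
            seen[n] = seen.get(n, 0) + 1
            correct[n] = correct.get(n, 0) + (pred == label)
    return {n: correct[n] / seen[n] for n in seen}

# Demo with toy stand-ins (not the real pipeline).
toy_users = [(["a", "a"], "HL"), (["b", "b"], "LL"), (["a", "b"], "HL")]
acc = leave_one_out(
    toy_users,
    discover=lambda rest: None,                                # no real model
    featurize=lambda actions: actions.count("a") / len(actions),
    classify=lambda model, fv: "HL" if fv >= 0.5 else "LL")
print(acc)   # accuracy after 1 and 2 observed actions
```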

38 Accuracy as a function of observed actions

39 Discussion
- User modeling framework for open-ended and unstructured interactions
  - Relevant behaviours are discovered via data mining techniques instead of being hand-crafted
- Very encouraging results with the CSP applet
  - Detected clusters represent groups with different learning gains
  - Online classifier: good accuracy soon enough to generate adaptive interventions
  - These interventions can be derived from the generated rules

40 Current Work
- Applying the discovered rules to generate the adaptive version of the CSP applet
- Adding eye-tracking input to the dataset

41 Conclusions
- Research on devising student-adaptive didactic support for exploratory activities beyond problem solving: interactive simulations
- Challenges in modeling interactions with no clear structure or definition of correctness
- Student modeling approaches based on probabilistic techniques and unsupervised machine learning, with very promising results
- Shown how eye-tracking can help! We are also exploring it in relation to assessing engagement and attention in educational games (Muir and Conati 2011)

