
1 Towards Collaborative Learning @ Scale

2 20 million minds foundation

3 MOOC Drawbacks
- Retention
- Learning (?)
- Isolation (?)

4 Collaborative Learning
- "Quick Thinks"
- Structured Groups

5 Active & Peer Learning: The Evidence (Large Courses)  Pausing frequently during lecture for 2 minute discussions leads to better comprehension (1-2 grade points higher)  [Ruhl et al, Jrnl Teacher Ed. 1987]  A meta-analysis over 60 physics courses and 6,500 students found improvements of almost 2 std.dev.  [Hake, Am. J. Physics, 1998]  Controlled experiment with > 500 physics students found improved attendance, engagement, and more than twice the learning.  [Deslauries et al., Science 2011]

6 Active & Peer Learning: The Evidence (Large Courses) Even if no one in the group knows the answer, discussing improves results (genetics) [Smith et al, Science 323, Jan 2, 2009]

7 Peer Learning Example
- From Deslauriers et al.:
  - Pre-class reading assignments and quizzes
  - (CQ) In-class clicker questions with student-student discussion
  - (GT) Small-group active learning tasks; turn in an individual written response
  - (IF) Targeted in-class instructor feedback
- Typical schedule for a 50-min class:
  - CQ1, 2 min; IF, 4 min.
  - CQ2, 2 min; IF, 4 min; CQ2 (continued), 3 min; IF, 5 min; revote CQ2, 1 min.
  - CQ3, 3 min; IF, 6 min.
  - GT1, 6 min; IF with a demonstration, 6 min; GT1 (continued), 4 min; IF, 3 min.

8 Results for Controlled Experiment
From Deslauriers et al., for a one-week intervention

9 Peer Learning (Smaller Classes)

10 Peer Learning Core Ideas
- Students learn better by explaining to others
- Extended group work must be structured
- Must promote both:
  - Positive interdependence
  - Individual accountability
- Group makeup:
  - Best if heterogeneous
  - Groups can change frequently

11 In-Person Course: Applied NLP

12

13

14 After 4 Weeks

15 After 12 Weeks

16 WHAT CAN BE IMPROVED? More short assignments!

17 PROJECT GOAL: MOOCS + PEER LEARNING How to do it?

18 First Step: Try MTurk
- Hypothesis:
  - People in groups will get answers right more often than those working alone
- Expectations:
  - The chats will be on topic
  - People will try to solve the problems

19 First Step: Try MTurk
- Issues:
  - How to motivate the workers?
  - How to coordinate the workers?
  - What kinds of questions to use?
  - How to structure the conversation?

20 How To Motivate?
- Experimental manipulation: if the entire group gets the right answer, everyone gets a bonus
- Control group: no mention of a bonus (no incentive for helping others)

21 MOOC Arrival Times, First Question, First Lecture

22 MOOC Arrival Times, Last Question, Last Lecture

23 Question Type: GMAT Critical Reasoning

24 System Workflow
Real-time crowdsourcing: Lasecki et al., CSCW 2013; Bernstein et al., UIST 2011
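The slides do not show the workflow's implementation, but the core real-time crowdsourcing problem it must solve is grouping workers who arrive at different times into synchronous discussions. Below is a minimal, hypothetical sketch of such a matchmaker; the class, method names, and thresholds are all assumptions, and the group sizes simply mirror the solo, size-2, and size-3 sessions reported in the experimental setup later in the talk.

```python
import time
from collections import deque

MAX_GROUP = 3      # target discussion size (size-3 sessions were most common)
WAIT_SECONDS = 60  # hypothetical cutoff before a waiting worker proceeds anyway

class Matchmaker:
    """Group workers who arrive at different times into synchronous chats."""

    def __init__(self):
        self.waiting = deque()  # (worker_id, arrival_time)

    def arrive(self, worker_id):
        """Register an arriving worker; return a full group if one forms."""
        self.waiting.append((worker_id, time.time()))
        if len(self.waiting) >= MAX_GROUP:
            return [self.waiting.popleft()[0] for _ in range(MAX_GROUP)]
        return None

    def tick(self):
        """Periodic check: release workers who have waited too long.

        Whoever is still waiting (one or two workers) proceeds as a smaller
        group or solo, which is how solo and size-2 sessions arise.
        """
        if self.waiting and time.time() - self.waiting[0][1] > WAIT_SECONDS:
            group = [worker_id for worker_id, _ in self.waiting]
            self.waiting.clear()
            return group
        return None
```

A deployment would call arrive() when a worker accepts the task and tick() on a timer, then open one chat room per returned group.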

25 Interaction: Small-Group Chat
- The CMC (computer-mediated communication) literature suggests the affordances are appropriate
- Video on next slide

26

27 Experimental Setup
- 226 worker sessions lasting on average 12.8 minutes (15.0 minutes excluding solo workers), with 169 solo workers, 25 discussions of size 2, and 73 discussions of size 3.
- Each session consisted of 2 questions: 2 minutes alone, 5 minutes in discussion, 20 seconds for the final answer choice (timing sketch below).
- 56% of the 452 attempts to answer questions were answered correctly.
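As a quick sanity check on that protocol, here is a tiny sketch encoding the per-question timing and the session arithmetic from this slide; the constant names are my own, not from the talk.

```python
# Per-question phase lengths, as stated on the slide (names are my own).
PHASES_SECONDS = {
    "answer_alone": 2 * 60,       # 2 minutes working alone
    "group_discussion": 5 * 60,   # 5 minutes of chat discussion
    "final_answer": 20,           # 20 seconds to commit a final choice
}
QUESTIONS_PER_SESSION = 2
SESSIONS = 226

per_question = sum(PHASES_SECONDS.values())                  # 440 seconds
session_minutes = per_question * QUESTIONS_PER_SESSION / 60
print(session_minutes)                   # ~14.7 min, close to the 15.0-min average for non-solo workers
print(SESSIONS * QUESTIONS_PER_SESSION)  # 452 answer attempts, matching the slide
```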

28 Results
- All hypotheses confirmed:
  - Engaging in discussion leads to more correct answers.
  - The bonus incentive leads to more correct changed answers.
  - The participants have substantive discussions.
- Of interest, but not a result:
  - More discussion is correlated with more correct answers.

29 Results
- 138 workers (61%) kept their original choices unchanged on both questions, 74 (33%) changed one answer after the discussion, and 14 (6%) changed both.
- 50% of workers who changed their answers improved their score; 18% lowered their score.
- 86% of workers who changed both answers improved their score.

30 Results
- Engaging in discussion leads to more correct answers:
  - The mean percentage of correct responses is higher in chatrooms with more than one student (Fisher's exact test, p < 0.01).

31 Results
- Bonus incentive leads to more correct answers:
  - In the control condition, participants changed 33 out of 121 answers (27%); in the bonus condition they changed 44 out of 139 answers (32%). No significant difference (Fisher's exact test, two-tailed p = 0.50).
  - However, among the changed answers, 14 (12%) changed from incorrect to correct in the control condition, while 31 (22%) changed from incorrect to correct in the bonus condition, a significant difference (Fisher's exact test, two-tailed p < 0.04). A reproduction sketch follows below.
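For concreteness, here is a sketch of how the two tests above could be reproduced with SciPy using the counts from this slide. How the 2x2 tables are arranged (changed vs. unchanged, and incorrect-to-correct vs. all other answers, per condition) is my assumption, not something the slide states.

```python
from scipy.stats import fisher_exact

# Changed vs. unchanged answers, by condition (counts from the slide).
changed = [[33, 121 - 33],   # control: 33 of 121 answers changed
           [44, 139 - 44]]   # bonus:   44 of 139 answers changed
_, p_changed = fisher_exact(changed, alternative="two-sided")

# Incorrect-to-correct changes vs. all other answers, by condition (assumed table layout).
improved = [[14, 121 - 14],  # control: 14 answers changed from incorrect to correct
            [31, 139 - 31]]  # bonus:   31 answers changed from incorrect to correct
_, p_improved = fisher_exact(improved, alternative="two-sided")

print(round(p_changed, 2))   # slide reports p = 0.50
print(round(p_improved, 3))  # slide reports p < 0.04
```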

32 Results
- Participants have substantive discussions:
  - 3 independent raters, scale of 1 to 4
  - 73 of 98 discussions (74%) were rated 4 by all raters
  - 80 (82%) had a median rating of 4 (Spearman's rho = 0.65)
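The per-discussion ratings are not included in the talk, so the array below is a made-up stand-in; the snippet only sketches how the reported summaries (median rating per discussion, pairwise Spearman correlation between raters) would be computed.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical ratings: one row per discussion, one column per rater (1-4 scale).
ratings = np.array([
    [4, 4, 4],
    [4, 3, 4],
    [2, 2, 3],
    [4, 4, 4],
])

medians = np.median(ratings, axis=1)   # median rating per discussion
pairwise_rho = [spearmanr(ratings[:, i], ratings[:, j])[0]
                for i, j in combinations(range(ratings.shape[1]), 2)]
print(medians, np.mean(pairwise_rho))  # compare against the reported rho = 0.65
```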

33

34

35

36 Next Steps
- Put this into MOOCs!
- We have an experiment underway right now.

37 Other MOOC Projects
- Forum Usage
- Role of Instructor
- Untangling Correlation from Causation
- MOOC Instructor Dashboards

38 Thank you!

