
1 Longitudinal Analysis of Peer Feedback in a Writing-Intensive Course: A Pilot Study
PI: Christina Hendricks; Co-PI: Jeremy Biesanz. University of British Columbia-Vancouver. Funded by the UBC Institute for the Scholarship of Teaching and Learning SoTL Seed Fund. Festival of Learning, June 2016. Slides licensed CC BY 4.0.

2 Literature on peer feedback
Receiving peer feedback improves writing (Paulus, 1999; Cho & Schunn, 2007; Cho & MacArthur, 2010; Crossman & Kite, 2012). Giving peer feedback improves writing (Cho & Cho, 2011; Li, Liu & Steckelberg, 2010).

3 Gaps: few studies look at the "dose-response curve"
Most studies look at revisions to a single essay (Draft 1 → PFB → Draft 2 → PFB → Draft 3), not changes across different essays. Few studies look at a "dose-response curve" across essays (Essay 1 → PFB → Essay 2 → PFB → Essay 3 → … → Essay n).

4 Pilot study research questions
1. How do students use peer comments given and received for improving different essays, rather than drafts of the same essay?
2. Are students more likely to use peer comments given and received for improving their writing after more than one or two peer feedback sessions? How many sessions are optimal?
3. Does the quality of peer comments improve over time?

5 Interdisciplinary, full year course for first-years
18 credits (English, History, Philosophy). Students write essays ( words). Peer feedback tutorials every week (4 students). [Slide images: Friedrich Nietzsche, Jane Austen, Osamu Tezuka (public domain, Wikimedia Commons); Toni Morrison (Wikimedia Commons, CC BY-SA 2.0)]

6 Data for pilot study, 2013-2014
10 essays by 12 participants (n=120 essays). Comments by 3 peers on essays (n=1218). Comments by instructor (n=3291). All coded with the same rubric.

7 Coding rubric
Categories (plus subcategories, for 11 options): Strength of argument, Organization, Insight, Style & Mechanics, Change for future. Numerical value: 1 = significant problem, 2 = moderate problem, 3 = positive comment/praise. Number of "1" comments: 239 total out of over 4000 (35 by students, 204 by instructor). E.g., STREV 2: "could use more textual evidence to support your claims."
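As a rough illustration of how coded comments can be tabulated to produce counts like the ones above (a hypothetical sketch: the field names and category codes are ours, not the project's actual data format):

```python
# Hypothetical sketch: tally coded comments by source and numerical value.
# The "source", "category", and "value" fields are illustrative assumptions.
import pandas as pd

comments = pd.DataFrame([
    {"source": "peer",       "category": "STREV", "value": 2},
    {"source": "instructor", "category": "ORG",   "value": 1},
    {"source": "peer",       "category": "STY",   "value": 3},
    {"source": "instructor", "category": "INS",   "value": 1},
])

# count "significant problem" (value 1) comments by who gave them,
# mirroring the tallies reported on this slide
print(comments[comments["value"] == 1].groupby("source").size())
```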

8 Inter-coder reliability
3 coders: Daniel Munro & Kosta Prodanovic (undergrads, former Arts One students) and Jessica Stewart (author, editor).
Fleiss' kappa, student comments (n=141): all categories 0.61 (moderate); most-used categories ~0.8 (excellent).
Intra-class correlation: student comment values 0.96 (excellent); essays (n=120) 0.71 (adequate).
Notes: Kappa asks how much agreement we observe relative to how much we would expect by chance. It takes into account the frequency of each type of code in the data: some codes are more frequent, so you would expect those to show more apparent agreement. Kappa ranges from -1 to +1, where 0 is the agreement expected by chance and -1 is complete disagreement; 0.6 is moderate agreement and 0.8 is substantial. Kappa covers just the category, and many of the most-used categories have agreement in the 0.8 range. Reliability on degree (the numerical value) is an intra-class correlation (ICC) of 0.96: to what extent is the average across the three raters reliable, i.e., how well does it correlate with the average of everyone who could possibly rate (so there is no benefit to adding more people)? The average rating is 2.5; 1's are pretty infrequent, so raters mostly agree on whether a comment is a 2 or a 3 (40% are 2s, 60% are 3s).
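A minimal sketch of the kappa computation, assuming three coders each assign one rubric category per comment (the labels and data are made up, and statsmodels is just one way to do this):

```python
# Fleiss' kappa for inter-coder agreement on comment categories.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = comments, columns = coders; values = assigned category (toy data)
ratings = np.array([
    ["STR", "STR", "STR"],
    ["ORG", "ORG", "STY"],
    ["INS", "INS", "INS"],
    ["STY", "STY", "STY"],
    ["ORG", "STR", "ORG"],
])

# aggregate_raters turns subject-by-rater labels into the
# subject-by-category count table that fleiss_kappa expects
table, categories = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))  # 0 = chance-level, 1 = perfect
```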

9 Looking at trends in comments over time

10 INSTRUCTOR comments: number of "2" comments over time
Strength: -.28**; Insight: -.04*; Style and Organization: no significant trend.

11 STUDENT comments: number of "2" comments over time
Insight: -.16**; Strength, Organization, and Style: no significant trend.

12 INSTRUCTOR comments: number of "3" comments over time
Strength: .31***; Style: .19**; Organization: .11**; Insight: .08**. These numbers are linear trends over time, not autoregressive.

13 STUDENT comments: number of "3" comments over time
No significant linear trend for Strength, Organization, Insight, or Style.

14 How does essay quality change over time?

15 Essay quality improves linearly: b = .038, t(107) = 2.1, p = .037
Essays rated on a 7-point scale.
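One plausible way to estimate this kind of linear trend (a sketch under assumptions: the column names are invented, the slides do not say exactly which model was fit, and a mixed model with a per-student intercept is just one reasonable choice):

```python
# Linear trend of essay quality across the essays, with a random intercept
# per student; the essay_num coefficient is the trend (cf. b = .038 above).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "student":   ["s1"] * 4 + ["s2"] * 4,                    # toy data
    "essay_num": [1, 2, 3, 4] * 2,
    "quality":   [3.5, 4.0, 4.0, 4.5, 3.0, 3.5, 4.0, 4.0],   # 7-point scale
})

model = smf.mixedlm("quality ~ essay_num", df, groups=df["student"]).fit()
print(model.params["essay_num"])  # positive slope = quality improving over time
```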

16 More complex analyses

17 Cross-lagged panel design with auto-regressive structure
[Diagram: cross-lagged panel model. Essay Quality at Time 1 → Time 2 → … N (path B, autoregressive) and Comments at Time 1 → Time 2 → … N (path C, autoregressive); path A links comments and essay quality at the same time; paths D and E are the cross-lags from comments to later essay quality and from essay quality to later comments.]
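Written out, the model on this slide amounts to two autoregressive equations plus a within-time association (the notation here is ours, with Q for essay quality and C for the comment measure; the path letters match the diagram):

```latex
\begin{aligned}
Q_{t+1} &= b\,Q_{t} + d\,C_{t} + \varepsilon^{Q}_{t+1} \\
C_{t+1} &= c\,C_{t} + e\,Q_{t} + \varepsilon^{C}_{t+1} \\
a &= \operatorname{Cov}(Q_{t},\, C_{t}) \qquad \text{(within-time path A)}
\end{aligned}
```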

18 Path A: instructor comments
What this says, basically, is that the coders' ratings of essay quality are pretty similar to the instructor's comments on essay quality, in these categories at least.
Significant relationships: ratings of 1 in Strength (-.12*) & Organization (-.23**); ratings of 2 in Strength (-.06*) & Style (-.08*); ratings of 3 in Strength (.11*), Insight (.35*), Style (.15*).
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001

19 Path A: student comments
Significant relationships: ratings of 2 in Insight (-.53*); ratings of 3 in Organization (.13*).
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001

20 Path C: instructor comments
Significant effects, all categories combined (they don't show up if split out by category): comments rated 1 (.29**); rated 2 (.23*); rated 3 (.21, p = .057).
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001

21 Path C: student comments
This could just be saying that students tend to give the same sorts of comments to the same people, but also that things aren't changing much from one essay to another.
Significant relationships: comments rated 2 in Strength (.22*) & Style (.33**); comments rated 3 in Style (.31*).
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001

22 Path D: student & instructor comments
A relationship appears ONLY if student & instructor comments are combined, and only for comments rated 1 (all categories combined): .05, p = .06 (marginal).

23 Research question 1
How do students use peer comments given and received for improving different essays, rather than drafts of the same essay? Very little evidence of significant relationships in Path D. No difference between comments given & received.

24 Research question 2
Are students more likely to use peer comments given and received for improving their writing after more than one or two peer feedback sessions? How many sessions are optimal? No evidence of any change over time in Path D. No difference between comments given & received.

25 Research question 3
Does the quality of peer comments improve over time? No evidence of change over time in Path A.

26 Research Question 3, cont’d
Student/instructor agreement on average numerical ratings of each essay tends to go down over time (-.04**): student ratings increase at only half the rate (.16*) that the instructor's ratings increase (.33*****).
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001

27 Research Question 3, cont’d
Correlations on number of comments, students & instructor (no change in these relationships over time):
                Comment value 1   Comment value 2   Comment value 3
Strength                                            0.23*
Organization    0.21*                               0.17*
Insight         0.17*
Style
*p < .05, **p < .01, ***p < .001, ****p < .0001, *****p < .00001
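As a sketch of what one cell of this table computes (the numbers are toy data, not the study's):

```python
# Pearson correlation between per-essay counts of student and instructor
# comments of a given value/category (illustrative data, ten essays).
from scipy.stats import pearsonr

student_counts    = [1, 0, 2, 1, 3, 0, 1, 2, 1, 0]
instructor_counts = [2, 1, 2, 1, 4, 0, 1, 3, 1, 1]

r, p = pearsonr(student_counts, instructor_counts)
print(f"r = {r:.2f}, p = {p:.3f}")
```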

28 Some conclusions
Pilot study: feasible for a larger sample? Yes, if:
- instructors code essay quality rather than coders
- essays are "chunked" for cross-lagged analyses
- collection of comments is made easy

29 References
Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20.
Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39.
Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48, 409–426.
Crossman, J. M., & Kite, S. L. (2012). Facilitating improved writing among students through directed peer review. Active Learning in Higher Education, 13.
Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536.
Paulus, T. M. (1999). The effect of peer and teacher feedback on student writing. Journal of Second Language Writing, 8.

30 Thank you!
Christina Hendricks, University of British Columbia-Vancouver. Website: Blog: Slides available: Slides licensed CC BY 4.0.

