
Introduction of system setting

Peer review systems, e.g. SWoRD [1], need intelligence for detecting and responding to problems with students' reviewing performance, e.g. problem localization (pinpointing the location of the problem mentioned in the feedback) and providing a solution.

Peer review localization models [2, 3] have been deployed in SWoRD:
- Predict localization for comments in paper and argument diagram reviews
- Flag not-localized comments in red
- Intervene if the ratio of localized comments is less than a threshold of 0.5

Student reviewers respond with one of the two following options:
- REVISE: revise their reviews and resubmit
- DISAGREE: submit their reviews without revision

Students can resubmit multiple times; all resubmissions go through the same localization prediction procedure.

References
1. K. Cho and C. D. Schunn (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers and Education, 48(3).
2. H. Nguyen and D. Litman (2013). Identifying localization in peer reviews of argument diagrams. In Proceedings of the 16th International Conference on Artificial Intelligence in Education (AIED). Springer-Verlag.
3. W. Xiong and D. J. Litman (2010). Identifying problem localization in peer-review feedback. In Proceedings of the 10th International Conference on Intelligent Tutoring Systems (ITS).

Q1: Localization model performance

Comment level (prediction performance): the models significantly outperform the majority baselines.

Submission level (intervention accuracy, from the student reviewer's perspective):
- An intervention is incorrect when all of the reviewer's comments were localized.
- An intervention is correct when at least one comment is human-labeled Not-localized.

Result: only one incorrect intervention, on a diagram review.

Table 3. Localization prediction performance at the comment level.
           Diagram review                    Paper review
           Accuracy  F-measure  Kappa        Accuracy  F-measure  Kappa
Baseline   61.5%     —          —            —         —          0.340
Model      81.7%     —          —            —         —          —

Examples of student comments and system intervention

[Poster figures: sample comments of an argument diagram review and of a paper review; system-scaffolded feedback for an argument diagram review and for a paper review; student possible responses, REVISE (left) and DISAGREE (right).]

Peer review corpus and research goal

Student reviews of argument diagrams and papers from a Research Methods course in psychology.

Questions of interest:
- Q1: How well did the system predict feedback localization?
- Q2: How did student reviewers respond to system intervention?
- Q3: How did system scaffolding impact reviewer revisions?

Data of interest: intervened submissions and subsequent resubmissions (if any). Resubmissions occurred after a non-scaffolded submission.

Table 1. Peer review data statistics. All resubmissions are counted.

                          Diagram review   Paper review
Reviewers/Authors         181/185          167/183
Submitted reviews         —                —
Intervened submissions    173              51

Table 2. Localization annotation results. Inter-rater kappa = 0.8.

                          Diagram review   Paper review
Localized comments        —                —
Not-localized comments    —                —

Table 4. Intervention accuracy.

                          Diagram review   Paper review
Total interventions       173              51
Incorrect interventions   1                0

Despite the system's high intervention accuracy, student reviewers disagreed more than they agreed with the system's scaffolded feedback:
- Diagram review: 52% DISAGREE vs. 48% REVISE (of 113 first interventions)
- Paper review: 70% DISAGREE vs. 30% REVISE (of 43 first interventions)

Student disagreement is not related to how well the original reviews were localized.
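The intervention rule (intervene when fewer than half of a submission's comments are localized) can be sketched as follows. This is a minimal illustration; the function name and threshold parameter are ours, not SWoRD's actual API:

```python
def should_intervene(localized, threshold=0.5):
    """Return True when the fraction of localized comments in a review
    submission falls below the threshold, triggering scaffolding feedback."""
    if not localized:
        return False  # no comments, nothing to intervene on
    return sum(localized) / len(localized) < threshold

# 1 of 4 comments localized -> ratio 0.25, below 0.5, so intervene
print(should_intervene([True, False, False, False]))  # True
# 2 of 3 comments localized -> ratio ~0.67, no intervention
print(should_intervene([True, True, False]))          # False
```

On a resubmission the same check would simply be re-run over the revised comments, matching the poster's note that all resubmissions go through the same localization prediction procedure.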
Q2: Student response analysis (to interventions on first submissions)

Interventions are classified into scopes:
- Scope=IN: intervened on the current diagram/paper
- Scope=OUT: intervened on some prior diagram/paper, but not the current one
- Scope=NO: never received intervention

Q3: Review revision analysis (to interventions on first submissions)

For resubmissions in each scope, we collect the edited comments and compare their true localization labels (Yes/No) to those of the prior comments.
- Scope=IN: No → No contributes the largest portion, indicating potential for improvement in our scaffolding of review localization.
- Scope=OUT: relatively large numbers of No → Yes suggest that the impact of scaffolding feedback persists into later reviewing sessions.
- Scope=NO: large numbers of Yes → Yes may show that those reviewers revised their reviews for some reason other than localization.

Table 5. Comment change patterns by intervention scope.

            Diagram review        Paper review
Scope       IN    OUT    NO       IN    OUT    NO
No → Yes    —     —      —        —     —      —
Yes → Yes   —     —      —        —     —      —
No → No     —     —      —        —     —      —
Yes → No    —     —      —        —     —      —

Conclusions and future work

- Enhanced a peer review system by integrating two review localization models and implementing a scaffolding intervention.
- Model performance was high at both the comment level and the submission level.
- Scaffolding feedback helped students localize their comments, even in later non-scaffolded review sessions.
- Future work addresses current limitations: the large number of unsuccessful attempts to localize comments (improve the user interface, highlight localization text) and the large number of student disagreements (user study).

Classroom Evaluation of a Scaffolding Intervention for Improving Peer Review Localization*
Huy V. Nguyen (2), Wenting Xiong (1, 2), and Diane J. Litman (1, 2)
1 Learning Research and Development Center | 2 Department of Computer Science, University of Pittsburgh
* This research is supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A to the University of Pittsburgh, and also by NSF.
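The scope-by-transition tallies behind Table 5 can be computed with a simple counter over (scope, prior label, revised label) triples. This is an illustrative sketch; the function name and the sample data are ours, not values from the study:

```python
from collections import Counter

def tally_changes(edits):
    """Count localization-label transitions (prior -> revised) for edited
    comments, grouped by intervention scope ("IN", "OUT", or "NO")."""
    counts = Counter()
    for scope, prior, revised in edits:
        counts[(scope, f"{prior} -> {revised}")] += 1
    return counts

# Hypothetical edited comments, for illustration only
edits = [
    ("IN", "No", "No"),    # intervened, still not localized
    ("IN", "No", "Yes"),   # intervened, successfully localized
    ("OUT", "No", "Yes"),  # prior intervention, carry-over effect
    ("NO", "Yes", "Yes"),  # never intervened, revised for another reason
]
counts = tally_changes(edits)
print(counts[("IN", "No -> No")])   # 1
print(counts[("OUT", "No -> Yes")]) # 1
```

Each cell of Table 5 is then `counts[(scope, transition)]` for one of the four transition patterns.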