Mapping Student Solution Rubric Categories to Faculty Perceptions
Andrew J. Mason 1, Brita Nellermoe 1,2
1 Physics Education Research and Development, University of Minnesota, Twin Cities, Minneapolis, MN
2 University of St. Thomas, St. Paul, MN

Presentation transcript:

Andrew J. Mason 1, Brita Nellermoe 1,2
1 Physics Education Research and Development, University of Minnesota, Twin Cities, Minneapolis, MN
2 University of St. Thomas, St. Paul, MN
MAAPT-WAPT Joint Meeting, October 29-30, 2010

 Background
   - Faculty Perceptions of Student Problem Solving 1
   - Rubric for Student Problem Solutions 2
   - Why are we looking at these two together?
 Analysis
   - Artifact categorization activity
   - How to analyze it?
 Results
   - Classification of index cards
   - Classification of chosen categories
   - Future work
1 Yerushalmi et al. 2007; Henderson et al.
2 Docktor 2009

 Purpose: to find out what instructors believe and value about teaching and learning problem solving
   - Concept map models illustrate what faculty are looking for
 Can inform curriculum developers so that they can be consistent with instructor beliefs

 Interviews carried out with 30 faculty
   - Different institutional representation: Community Colleges (CC), Private Colleges (PC), State Universities (SU), Research University (RU)
   - 2 interviewers; method kept consistent
 Participants studied problem statements, student artifacts, and faculty artifacts
   - Generated their own artifacts, i.e. conversation topics
 Then categorized their own artifacts as the last task
   - This is the task we are interested in

 Focus: minimum set of problem-solving categories by which a written problem solution may be evaluated
   - Validity and inter-rater reliability have been tested
 Five such categories:
   - Useful Description (UD)
   - Physics Approach (PA)
   - Specific Application of Physics (SAP)
   - Mathematical Procedures (MP)
   - Logical Progression (LP)
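
A minimal sketch (Python; not from the original slides) of how these five sub-skill categories can be held as a fixed code set when coding artifacts. The abbreviations follow the slide above; the NO_FIT label is a hypothetical addition for artifacts that match none of the five.

# The five rubric sub-skill categories, keyed by the abbreviations used in this talk.
RUBRIC_CATEGORIES = {
    "UD": "Useful Description",
    "PA": "Physics Approach",
    "SAP": "Specific Application of Physics",
    "MP": "Mathematical Procedures",
    "LP": "Logical Progression",
}

# Hypothetical extra label for artifacts that fit none of the rubric categories.
NO_FIT = "NF"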

 Rubric categories can be used to examine beliefs and values expressed by faculty in interviews
 Research Questions:
   - Is it valid to use the rubric, and if so, how?
   - What does the rubric tell us about individual cards and categories?
   - Future: How can we weigh the context of individual cards and categories?

 Can see how many artifacts fit into each given category
 Rubric is meant for problem solving: can we interpret faculty artifacts with it?
   - Interviews focused on problem solving
   - If at least the individual cards, and possibly the faculty categories, can mostly fit into rubric categories, then the tool is valid for this purpose

 2 raters use the rubric for categorization
   - Establish inter-rater reliability using a sample of 5 faculty distributed evenly across SU, PC, and CC (see the agreement sketch below)
   - Split the remaining 25 between the two raters by random selection and categorize
 Agreement established through discussion
 Do this for individual artifact cards and then for faculty-chosen categories
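
The slides do not name a particular agreement statistic; the sketch below (Python, with hypothetical rater codes) shows two common choices, simple percent agreement and Cohen's kappa, that could be used to compare the two raters' codes for the same cards before the discussion step.

from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Fraction of cards to which the two raters assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    p_chance = sum(counts_a[c] * counts_b[c] for c in set(codes_a) | set(codes_b)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)  # undefined if p_chance == 1

# Hypothetical rubric codes assigned to the same five cards by the two raters.
rater_1 = ["PA", "UD", "PA", "SAP", "LP"]
rater_2 = ["PA", "UD", "MP", "SAP", "LP"]
print(percent_agreement(rater_1, rater_2))  # 0.8
print(cohens_kappa(rater_1, rater_2))       # 0.75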

 Things to look at:
   - Overall category performance, all 30 faculty
   - Institution type
   - Interviewer check

 2.5% of individual faculty artifacts do not fit into any rubric category (meaning 97.5% fit at least one)
   - Examples that don't fit: confidence, very vague ideas
 Individual category focus (see the tally sketch below)
   - PA has the highest percentage overall
   - UD and SAP are the next most common
   - MP and LP receive attention, but not as much as the other three
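
A minimal sketch, assuming each card has already been tagged with zero or more rubric codes (the cards below are hypothetical), of how the per-category percentages summarized above could be tallied.

from collections import Counter

# Hypothetical coded cards: each card carries zero or more rubric codes.
coded_cards = [
    {"PA"}, {"PA", "UD"}, {"SAP"}, {"PA", "LP"}, {"MP"},
    {"UD", "SAP"}, set(),  # an empty set means the card fits no rubric category
]

tally = Counter(code for card in coded_cards for code in card)
no_fit = sum(1 for card in coded_cards if not card)

for code in ("UD", "PA", "SAP", "MP", "LP"):
    print(f"{code}: {100 * tally[code] / len(coded_cards):.1f}% of cards")
print(f"No rubric category: {100 * no_fit / len(coded_cards):.1f}% of cards")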

- RU professors focus more heavily on PA, less heavily on UD and MP
- Other institutions seem to focus roughly the same way on all categories, although PC tends to focus a bit more on LP and a bit less on UD

Faculty interviewed, by institution type and interviewer:

Inst. Type   Int. 1 #   Int. 2 #
CC           5          2
PC           4          5
RU           1          5
SU           4          4

- Groups are mostly evenly balanced by interviewer
- The only notable difference seems to result from the RU skewing (PA)

- It is somewhat more difficult to classify the faculty-chosen overall categories than the individual cards

- RU categories focus more heavily on PA and SAP, with none for MP
- SU categories also focus heavily on PA, and a bit more on UD
- CC and PC are similar

- Discrepancy in faculty categories between interviewers (CBC vs. MP)
- This is independent of institution type (not a result of skewing)

 Need to consider the context of the interviews
   - Faculty categories can't be matched as well as individual cards can; is there any further insight?
   - Are interviewers interpreting categories differently?
   - How to treat a possible qualitative difference between the value of artifacts?
     e.g. overall principle vs. comment on a student solution

 10 interviews sampled for context
   - Taken from the beginning, middle, and end of the interview period
   - Even sampling across interviewers and institution types
   - All references to cards and categories marked via Ctrl+F (an automated equivalent is sketched below)
 How to evaluate the qualitative difference between items?
 How to relate categories to individual cards?
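
The marking described above was done by hand with Ctrl+F; the sketch below (Python, with a hypothetical transcript file name and keyword pattern) automates the same pass, recording the line number of every mention of cards or categories in a transcript.

import re
from pathlib import Path

# Hypothetical keyword pattern for card/category mentions in an interview transcript.
KEYWORDS = re.compile(r"\b(cards?|categor(?:y|ies))\b", re.IGNORECASE)

def mark_references(transcript_path):
    """Return (line number, line text) for every keyword mention in a transcript file."""
    hits = []
    for lineno, line in enumerate(Path(transcript_path).read_text().splitlines(), start=1):
        if KEYWORDS.search(line):
            hits.append((lineno, line.strip()))
    return hits

# Example usage with a hypothetical file name:
# for lineno, text in mark_references("interview_03.txt"):
#     print(f"{lineno}: {text}")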

 For more information visit