
Advancing Assessment of Quantitative and Scientific Reasoning
Donna L. Sundre, Amy D. Thelk
Center for Assessment and Research Studies (CARS), James Madison University

Overview of talk
Current NSF research project
History of the test instrument
Phase I: Results from JMU
Phase II: Future directions
Results from some of our partners: Michigan State, Truman State, Virginia State

Current NSF Project
3-year grant funded by the National Science Foundation: "Advancing assessment of scientific and quantitative reasoning"
Hersh & Benjamin (2002) listed four barriers to assessing general education learning outcomes: confusion; definitional drift; lack of adequate measures; and the misconception that general education cannot be measured.
This project addresses all of these concerns, with special emphasis on the dearth of adequate measures.

Objective of NSF project
To explore the psychometric quality of JMU's Quantitative and Scientific Reasoning instruments and their generalizability to institutions with diverse missions that serve diverse populations.

Partner Institutions
Virginia State University: State-supported; Historically Black institution
Michigan State University: State-supported; research institution
Truman State University: State-supported; Midwestern liberal arts institution
St. Mary's University (Texas): Independent; Roman Catholic; Hispanic-Serving institution

Project phases
Phase I: First faculty institute (conducted July 2007 at JMU), followed by data collection, identification of barriers, and reporting of results
Phase II: Validity studies (to be developed and discussed during the second faculty institute, July 2008), dissemination of findings and institutional reports

History of the instrument
Natural World test, developed at JMU, currently in its 9th version
Successfully used to assess General Education program effectiveness in scientific and quantitative reasoning
Generates two subscores: SR and QR
Summary of results since 2001 (linked file: Table of Results -- 5 Test Versions.doc)

Adaptation of an instrument
The JMU instrument has been carefully scrutinized for over 10 years
The QR and SR instruments are currently administered at over 25 institutions across the nation
NSF funded this CCLI project to further study procedures for adoption and adaptation of instruments and assessment models

Evaluating the generalizability of the instrument

Step 1: Mapping Items to Objectives
Relating test items to stated objectives for each institution
In the past, a back-translation method was used (Dawis, 1987) (linked file: ..\..\JMU\NSF Grant\Truman\Blank ObjectiveGrid_truman.doc)
Participants at the NSF Faculty Institute used a new content alignment method that was reported at NCME (Miller, Setzer, Sundre & Zeng, 2007)
Forms were custom-made for each institution (linked file: Example Content Alignment form.doc)

Early content validity evidence
Results strongly support generalizability of test items:
Truman State: 100% of items mapped to their objectives
Michigan State: 98% (1 item not mapped)
Virginia State: 97% (2 items unmapped)
St. Mary's: 92% (5 items not mapped)
Mapping of items alone is not sufficient; balance across objectives must also be obtained.
Teams then created additional items to cover identified gaps in content coverage: 14 for MSU; 11 for St. Mary's; 10 for Truman State; 4 for VSU
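Conceptually, the mapping step amounts to an item-by-objective grid. Below is a minimal Python sketch, with invented item and objective labels, showing how such a grid can be summarized into the two figures discussed above: the percentage of items mapped and the balance of coverage across objectives.

```python
from collections import Counter

# Hypothetical item-to-objective mapping: each test item is linked to zero
# or more institutional objectives identified during the faculty institute.
alignment = {
    "item_01": ["QR-1"],
    "item_02": ["SR-2", "QR-3"],
    "item_03": [],            # an unmapped item signals a possible content gap
    "item_04": ["SR-2"],
}

# Percent of items mapped to at least one objective (the figures reported above).
mapped = [item for item, objectives in alignment.items() if objectives]
print(f"{len(mapped)}/{len(alignment)} items mapped "
      f"({100 * len(mapped) / len(alignment):.0f}%)")

# Balance across objectives: how many items address each objective.
coverage = Counter(obj for objectives in alignment.values() for obj in objectives)
print(coverage)   # objectives with low counts may need newly written items
```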

Step 2: Data Collection and Analysis
During the Fall 2007 semester, the test was administered to students at 3 of the 4 partner institutions
Spring 2008: data collection from students at the sophomore level or above
Results so far:
Means not given: this activity is not intended to promote comparison of students across institutions
At this stage, reliabilities provide the most compelling generalizability evidence; of course, the upcoming validity studies will be informative

Score              JMU freshmen   SMU freshmen   TSU Jrs/Srs   VSU         MSU
                   (N = 1408)     (N = 426)      (N = 345)     (N = 653)   (N = 1029)
QR                 α = .64        α = .63        α = .66       α = .55     --
SR                 α = .71        α = .75        α = .72       α = .65     --
Total Score NW-9   α = .78        α = .81        α = .79       α = .73     α = .71
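The reliabilities above are coefficient (Cronbach's) alpha values. Below is a minimal Python sketch of how alpha is computed from an examinee-by-item score matrix; the response data are simulated purely for illustration and are not the project's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (n_examinees x n_items) matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 0/1 item responses driven by a common latent ability, so the
# items correlate and alpha is non-trivial. All values are invented.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))                                  # 200 examinees
responses = (ability + rng.normal(size=(200, 25)) > 0).astype(int)   # 25 items
print(round(cronbach_alpha(responses), 2))
```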

Research at JMU
Standard setting to aid in interpretation
Validity evidence: instrument aligns with curriculum

Standard Setting
Used the Angoff method to set standards
Our process was informal and unique
Results look meaningful, but we will reevaluate as we collect more data in upcoming administrations
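For context, here is a minimal Python sketch of how Angoff ratings are typically aggregated into a cut score; the JMU process was informal and unique, so this is a generic illustration with invented ratings, not a reproduction of that process.

```python
import numpy as np

# Hypothetical Angoff ratings: rows are raters, columns are items; each entry
# is a rater's estimate of the probability that a minimally competent student
# answers that item correctly.
ratings = np.array([
    [0.6, 0.7, 0.4, 0.8, 0.5],
    [0.5, 0.8, 0.5, 0.7, 0.6],
    [0.7, 0.6, 0.4, 0.9, 0.5],
])

rater_cut_scores = ratings.sum(axis=1)    # each rater's implied raw cut score
panel_cut_score = rater_cut_scores.mean() # panel-level cut score
print(rater_cut_scores, round(panel_cut_score, 2))
```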

Faculty Objective Standards

Validity evidence for instrument and curriculum at JMU

Variables                          Pearson's r
Freshman QR9 score & AP credits    0.28
Freshman QR9 score & DE credits    0.21
Freshman SR9 score & AP credits    0.24
Freshman SR9 score & DE credits    0.20

Validity evidence for instrument and curriculum at JMU -- 2

Variables                          Pearson's r
Soph/Jr. NW9 score & AP credits    0.16
Soph/Jr. NW9 score & DE credits    0.01
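The coefficients above are ordinary Pearson correlations between test scores and prior-credit counts. A minimal Python sketch with invented data (hypothetical QR9 scores and AP credit counts) illustrates the computation.

```python
import numpy as np

# Invented data: QR9 scores loosely related to the number of AP credits a
# student brings in, mirroring the kind of freshman correlations reported above.
rng = np.random.default_rng(1)
ap_credits = rng.poisson(2, size=500)
qr9_score = 12 + 0.6 * ap_credits + rng.normal(0, 3, size=500)

r = np.corrcoef(qr9_score, ap_credits)[0, 1]   # Pearson's r
print(round(r, 2))
```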

Phase II studies
Samples of upcoming studies:
Correlational studies: Is there a relationship between scores on the QR/SR and other standardized tests? … and other academic indicators?
Comparison of means or models: Is there variation in the level of student achievement based upon demographic variables? Is there a relationship between scores on the QR/SR and declared majors? Can this instrument be used as a predictor of success and/or retention for specific majors? (A group-comparison sketch follows below.)
Qualitative research: Will institutional differences be reflected in the results of a qualitative interview that accompanies the administration of the QR/SR?
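As one illustration of the "comparison of means" studies listed above, here is a minimal Python sketch using invented scores for two hypothetical declared-major groups; a Welch's t-test is used as the example analysis, not as the project's chosen method.

```python
import numpy as np
from scipy import stats

# Invented QR/SR total scores for two hypothetical major groups.
rng = np.random.default_rng(2)
stem_scores = rng.normal(45, 8, size=300)       # hypothetical STEM majors
non_stem_scores = rng.normal(42, 8, size=400)   # hypothetical non-STEM majors

# Welch's t-test compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(stem_scores, non_stem_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```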

References
Dawis, R. (1987). Scale construction. Journal of Counseling Psychology, 34.
Hersh, R. H., & Benjamin, R. (2002). Assessing selected liberal education outcomes: A new approach. Peer Review, 4(2/3).
Miller, B. J., Setzer, C., Sundre, D. L., & Zeng, X. (2007, April). Content validity: A comparison of two methods. Paper presented at the National Council on Measurement in Education, Chicago, IL.