Understanding the Construct to be Assessed. Stephen N. Elliott, PhD, Learning Science Institute & Dept. of Special Education, Vanderbilt University. October 2007.

Presentation transcript:

Understanding the Construct to be Assessed. Stephen N. Elliott, PhD, Learning Science Institute & Dept. of Special Education, Vanderbilt University.

Construct: Dictionary Definition. To form by assembling parts; build. To create by systematically arranging ideas or expressions. Something, especially a concept, that is synthesized or constructed from simple elements.

Basic Premises. Understanding the construct to be assessed is critical to developing a plan for validating the resulting test score interpretation. Understanding what is meant by the term construct is important for communicating clearly with test score users and others interested in student achievement.

Constructs & Test Score Validity: Some History. The term construct (logical or hypothetical) originated in Bertrand Russell's 1929 maxim that "wherever possible, logical constructions are to be substituted for inferred entities." MacCorquodale & Meehl (1948) distinguished hypothetical constructs (unobservable, inferred entities) from intervening variables (abstractions from observations). Construct validity entered the measurement vocabulary with the 1954 Test Standards published by the APA, where it was defined as "the degree to which the individual possesses some hypothetical trait or quality [construct] presumed to be reflected in the test performance."

More History: Construct as Attribute. The concept of validating a construct was developed more fully by Cronbach & Meehl (1955), who referred to a construct as an attribute. They went on to list construct validation procedures: (a) criterion-group differences, (b) factor analysis, (c) item analysis, (d) experimental studies, and (e) studies of process. Through the work of Cronbach, with contributions from Messick (1980, 1989), the common view became a single conception of validity referred to as construct validity. Thus, the validation of a test score can be taken to include every form of evidence that the score, to some acceptable extent, measures a specified attribute (a quantifiable property or quality) of a respondent.
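
The validation procedures listed above are empirical checks, so a brief illustration may help. Below is a minimal sketch in Python, using entirely invented data, of two of them: (a) a criterion-group comparison between two groups that theory says should differ on the construct, and (c) a simple item analysis based on item-total correlations. All names and numbers are hypothetical, not drawn from the presentation.

```python
from statistics import mean

# Hypothetical item scores (1 = correct, 0 = incorrect) for two groups that
# theory says should differ on the construct (e.g., instructed vs. not instructed).
instructed     = [[1, 1, 0, 1], [1, 1, 1, 1], [0, 1, 1, 1], [1, 0, 1, 1]]
not_instructed = [[0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

def total(responses):
    """Total score: the count of correct responses."""
    return sum(responses)

# (a) Criterion-group differences: groups expected to differ should, in fact, differ.
gap = mean(total(r) for r in instructed) - mean(total(r) for r in not_instructed)
print(f"Mean score difference between groups: {gap:.2f}")

# (c) Item analysis: each item's correlation with the total score
# (uncorrected totals, kept simple for illustration).
def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

all_responses = instructed + not_instructed
totals = [total(r) for r in all_responses]
for i in range(len(all_responses[0])):
    item = [r[i] for r in all_responses]
    print(f"Item {i + 1} item-total correlation: {pearson(item, totals):.2f}")
```

A positive group difference and positive item-total correlations are only two strands of evidence; on Cronbach and Meehl's account they would be combined with factor-analytic, experimental, and process evidence before any claim about the construct is made.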

Nature of Attributes. Attributes may be observable or unobservable and include achievements and aptitudes. Levels of inference range from the abstractive to the existential. Thank goodness for test items that yield scores! Items help define the content from which we make attributions, and these attributions often take the form of a test score interpretation.

Test Score Interpretation. "The proposed interpretation refers to the construct or concepts the test is intended to measure. Examples of constructs are mathematics achievement, performance as a computer technician, …. To support test development, the proposed interpretation is elaborated by describing its scope and extent and by delineating the aspects of the construct that are to be represented. The detailed description provides a conceptual framework for the test, delineating the knowledge, skills, abilities, … to be assessed." (AERA, APA, & NCME, 1999, p. 9)
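
To make the Standards' notion of a conceptual framework concrete, here is a minimal sketch in Python of how a content framework might be organized by construct, domain, and strand, with the knowledge and skills to be assessed listed under each strand. Every label below is hypothetical and invented for illustration; it is not taken from any state's actual standards or from the sample framework shown later in the presentation.

```python
from dataclasses import dataclass, field

@dataclass
class Strand:
    name: str
    skills: list[str] = field(default_factory=list)  # knowledge and skills to be assessed

@dataclass
class ContentFramework:
    construct: str                      # the construct the test is intended to measure
    domain: str                         # broad academic content domain
    strands: list[Strand] = field(default_factory=list)

# Hypothetical framework for a mathematics alternate assessment (labels invented).
framework = ContentFramework(
    construct="Mathematics achievement",
    domain="Mathematics",
    strands=[
        Strand("Number and operations", ["compare whole numbers", "add within 100"]),
        Strand("Measurement", ["tell time to the hour", "compare lengths"]),
    ],
)

for strand in framework.strands:
    print(f"{framework.domain} / {strand.name}: {', '.join(strand.skills)}")
```

Writing the framework down in an explicit form like this is one way to make the scope and extent of the proposed interpretation inspectable before items are written.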

Our World: Student Achievement. We are interested in understanding student achievement, that is, the knowledge and skills students possess at a given point in time in the content domains of language arts, mathematics, and science. We gain insight into student achievement by observing the amount of knowledge and skill students demonstrate in these defined content domains; this amount of the measured attribute takes the form of a test score. We attribute more knowledge or skill to samples of behavior or work in which students respond correctly to a larger number of items or to more complex items. Our interpretations about student attributes are situated within broad academic content domains and framed by performance level descriptors.
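
Because these interpretations are framed by performance level descriptors, a tiny worked example may clarify the chain from observed responses to an interpreted score. The sketch below, in Python, sums hypothetical item scores into a test score and maps that score onto invented performance levels; the cut scores and labels are placeholders, since real ones come from standard setting.

```python
# Hypothetical scored responses (points earned per item) for one student:
# the observed performances on items/tasks.
item_scores = [1, 0, 2, 1, 1, 0, 2]

# The test score: the amount of the measured attribute, summarized as a number.
test_score = sum(item_scores)

# Hypothetical performance levels, ordered from the highest cut score down.
performance_levels = [("Advanced", 10), ("Proficient", 7), ("Basic", 4), ("Below Basic", 0)]

def interpret(score: int) -> str:
    """Map a total score onto the first performance level whose cut it meets."""
    for label, cut in performance_levels:
        if score >= cut:
            return label
    return "Below Basic"

print(f"Total score: {test_score} -> interpretation: {interpret(test_score)}")
```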

Construct Logic Simplified (diagram): Performances on Item/Task (observed) → Test Score → Test Score Interpretation & Abstracted Attribution (inferred).

Unified View of Validity. The 1985 Test Standards and Messick's epic chapter united all types of validity under construct validity. As described by Messick (1989), construct validity is "…the unifying concept of validity that integrates content and criterion considerations into a common framework for testing rational hypotheses about theoretically relevant relationships."

Information Sources for the Constructs Assessed with AAs: states' academic content standards; states' academic achievement standards, in particular the Performance Level Descriptors for each content area; validity and alignment studies as reported in Alternate Assessment Technical Manuals; and reports to consumers of the assessment results.

Sample Content Framework

Sample Performance Level Descriptors

Sample Evidence-Based Support for Construct Claims

Another Sample of Evidence to Support Construct Claims

More on Validity & Test Score Interpretation. As we investigate the constructs measured by alternate assessments, we are confronted with a number of issues that affect the validity of the test score interpretation, for example: teachers' support and prompting, tests with items or tasks that are non-academic, assessments that sample only a limited portion of the intended domain, and item or task rubrics that score for more than achievement.
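
One of these issues, sampling only a limited portion of the intended domain, lends itself to a simple mechanical check. The sketch below, in Python with invented strand names and an illustrative threshold, compares the strands actually tapped by a hypothetical assessment's items against the strands of the intended domain and flags possible construct underrepresentation.

```python
# Hypothetical intended content domain: the strands the construct is meant to cover.
intended_strands = {
    "number and operations", "algebraic thinking", "geometry",
    "measurement", "data analysis",
}

# Strand tag attached to each item/task on a hypothetical alternate assessment.
item_strands = ["number and operations", "number and operations",
                "measurement", "measurement", "measurement"]

sampled = set(item_strands) & intended_strands
missing = intended_strands - sampled
coverage = len(sampled) / len(intended_strands)

print(f"Domain coverage: {coverage:.0%}")
print("Strands not sampled:", ", ".join(sorted(missing)) or "none")
if coverage < 0.8:  # illustrative threshold only, not a professional standard
    print("Warning: possible construct underrepresentation.")
```

A check like this speaks only to construct underrepresentation; prompting, non-academic tasks, and rubrics that score more than achievement raise construct-irrelevant variance questions that require different evidence.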

Construct Underrepresentation & Construct-Irrelevant Variance

Understanding the construct assessed is foundational. "A validity argument provides an overall evaluation of the plausibility of the proposed interpretations and uses of test scores…. To evaluate … a test score interpretation, it is necessary to be clear about what the interpretation claims." (Kane, 2002)

Thanks & More: Key References
AERA, APA, & NCME (1999). Standards for educational and psychological testing. Washington, DC: Authors.
Kane, M. (2002). Validating high-stakes testing programs. Educational Measurement: Issues and Practice, 21(1).
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: LEA.
Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2).
Contact Information