

1 Developing General Education Course Assessment Measures Anthony R. Napoli, PhD Lanette A. Raymond, MA Office of Institutional Research & Assessment Suffolk County Community College http://sccaix1.sunysuffolk.edu/Web/Central/IT/InstResearch/

2 Why Validity & Reliability?  Assessment results must represent student achievement of course learning objectives.  Evaluation of the validity and reliability of the assessment measure provides the evidence that it does so.

3 Types of Measures  ‘Performance’ Measures  ‘Objective’ Measures

4 Validity for Performance Measures  Identified learning outcomes represent the course (domain sampling)  The measure addresses the learning outcomes (content validity)  There is a match between the measure and the rubric (criteria for evaluating performance)  Rubric scores can be linked to the learning outcomes, and indicate the degree of student achievement within the course

5 Validity for Objective Measures  Identified learning outcomes represent the course (domain sampling)  The items on the measure address specific learning outcomes (content validity)  Scores on the measure can be applied to the learning outcomes, and indicate the degree of student achievement within the course

6 Content Validity (MA23)

7 (image-only slide; no transcript text)

8 (image-only slide; no transcript text)

9 Content Validity (MA61)

10 (image-only slide; no transcript text)

11 Content Validity (SO11)

Objective I: Identify the basic methods of data collection
Objective II: Demonstrate an understanding of basic sociological concepts and social processes that shape human behavior
Objective III: Apply sociological theories to current social issues

A 30-item test measured students' mastery of the objectives.

12 Content Validity (SO11)

13 (image-only slide; no transcript text)

14 (image-only slide; no transcript text)

15 Reliability  Can it be done consistently?  Can the rubric be applied consistently across raters? (Inter-rater reliability)  Can each of the items act consistently as a measure of the construct? (Inter-item reliability)

16 Inter-Rater Reliability (MA23) – The Rubric: Item 1A

17 Inter-Rater Reliability (MA23) – The Rubric: Item 1B

18 Inter-Rater Reliability (MA23) – The Data Set

19 Inter-Rater Reliability (MA23) – Results

Item   r       Low    High   Range
1A     .9837   4.21   5.00   0.79
1B     .9877   3.42   4.14   0.72
1C     .9715   1.71   2.85   1.04
2A     .9966   3.57   3.78   0.21
2B     .9974   1.93   2.14   0.21
3A     .9734   4.00   4.57   0.57
3B     .9963   3.57   3.85   0.28
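The per-item correlations on this slide can be read as Pearson correlations between pairs of raters scoring the same student work. A minimal sketch of that computation, using invented scores rather than the actual MA23 data:

```python
# Sketch: inter-rater reliability as the Pearson correlation between two
# raters' rubric scores on the same set of papers.
# The scores below are invented for illustration; they are NOT the MA23 data.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rater1 = [4.0, 3.5, 5.0, 2.0, 4.5, 3.0]  # rater 1's rubric scores
rater2 = [4.2, 3.4, 4.9, 2.1, 4.6, 3.2]  # rater 2's scores, same papers

r = pearson_r(rater1, rater2)
```

Raters who track each other closely yield r near 1, matching the high per-item values in the table; the Low/High/Range columns appear to summarize the spread of mean scores across raters.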

20 Inter-Item Reliability (MA61)

Objective 1: Demonstrate an understanding of a mathematical function (range and domain, symmetric, composite, inverses). Items 1a, 1b, 2, 3, 4, 8a, 10a. Reliability = .61
Objective 2: Understand the zeros of quadratic functions and sketch graphs of quadratic functions. Items 5a-d. Reliability = .65
Objective 3: Comprehend the significance of the fundamental theorem of algebra and be able to solve a polynomial and write a polynomial in its factored form. Items 6a-f, 7. Reliability = .72
Objective 5: Sketch the graph of rational functions. Items 8b-d. Reliability = .83
Objective 6: Sketch the graphs of exponential and logarithmic functions. Items 9a-c, 10a-d. Reliability = .83
Objective 7: Solve exponential and logarithmic equations. Items 11, 12. Reliability = .52
Objective 8: Understand and graph the trigonometric functions, and solve applications using right triangle relationships. Items 13a-c, 14. Reliability = .68
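A standard inter-item reliability statistic for a set of items mapped to one objective is coefficient (Cronbach's) alpha. A minimal sketch, using made-up 0/1 item scores rather than the actual MA61 responses:

```python
# Cronbach's alpha for one objective's items.
# Data are invented for illustration; NOT the actual MA61 responses.

def cronbach_alpha(items):
    """items: one list per item, each aligned by student (same length)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of students

    def variance(xs):         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(it) for it in items)
    totals = [sum(it[s] for it in items) for s in range(n)]  # per-student totals
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# 4 items scored 0/1 for 6 students
items = [
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 1],
]
alpha = cronbach_alpha(items)
```

Alpha rises when the items within an objective covary (students who get one item right tend to get the others right), which is what "inter-item reliability" asks of the measure.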

21 Face Validity and Reliability… Is this enough?  Measures with face validity & adequate levels of reliability can produce misleading or inaccurate results.  Even content-valid measures cannot guarantee accurate estimates of student achievement.

22 Criterion-Related Validity (MA23 – earlier pilot)

23 Criterion-Related Validity (MA23)

24 Criterion-Related Validity (MA61)

25 Motivational Comparison (PC11)  Two groups: graded embedded questions vs. a non-graded form preceded by a motivational speech  Mundane realism

26 Motivational Comparison (PC11)  The graded condition produces higher scores (t(78) = 5.62, p < .001).  Large effect size (d = 1.27).
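The slide reports an independent-samples t test and Cohen's d. A sketch of both statistics on invented scores (not the actual PC11 data):

```python
# Independent-samples t statistic and Cohen's d (pooled SD).
# Scores are invented for illustration; NOT the actual PC11 data.
import math

def t_and_d(g1, g2):
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    d = (m1 - m2) / math.sqrt(sp2)                   # Cohen's d
    return t, d

graded = [85, 70, 90, 78, 82, 74, 88, 80]      # exam scores, graded group
non_graded = [72, 65, 80, 60, 75, 70, 78, 62]  # exam scores, non-graded group
t, d = t_and_d(graded, non_graded)
```

d expresses the group difference in pooled-standard-deviation units, so a d above about 0.8 is conventionally called a large effect.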

27 Motivational Comparison (PC11)  Minimum competency: 70% or better  The graded condition produces a greater rate of competency (Z = 5.69, p < .001).
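The Z statistic comparing competency rates is consistent with a pooled two-proportion z test. A sketch with invented pass counts (not the actual PC11 data):

```python
# Two-proportion z test (pooled), comparing competency rates across groups.
# Counts are invented for illustration; NOT the actual PC11 data.
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for H0: the two groups share one competency rate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 34 of 40 students at/above 70% in the graded group, 12 of 40 non-graded
z = two_prop_z(34, 40, 12, 40)
```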

28 Motivational Comparison (PC11)  In the non-graded condition this measure is neither reliable nor valid: KR-20 (non-graded) = 0.29
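KR-20 is the special case of coefficient alpha for dichotomously (right/wrong) scored items. A sketch on made-up 0/1 responses (not the actual PC11 data):

```python
# KR-20: internal-consistency reliability for 0/1-scored test items.
# Responses are invented for illustration; NOT the actual PC11 data.

def kr20(responses):
    """responses: one 0/1 list per student, one entry per item."""
    n = len(responses)            # students
    k = len(responses[0])         # items
    ps = [sum(row[j] for row in responses) / n for j in range(k)]  # p correct
    pq = sum(p * (1 - p) for p in ps)
    totals = [sum(row) for row in responses]
    m = sum(totals) / n
    # population variance of total scores (matches the p*q item-variance form)
    var = sum((t - m) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var)

responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]
reliability = kr20(responses)
```

A KR-20 of 0.29, as in the non-graded condition, means most of the score variance is noise rather than a consistent underlying trait, so the scores cannot support valid inferences.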

29 Motivational Comparison (PC11)

30 Developing General Education Course Assessment Measures Anthony R. Napoli, PhD Lanette A. Raymond, MA Office of Institutional Research & Assessment Suffolk County Community College http://sccaix1.sunysuffolk.edu/Web/Central/IT/InstResearch/

