Beginning with the End in Mind: Choosing Outcomes and Methods
Dr. Kim Yousey-Elsener
Associate Director of Assessment Programs, StudentVoice
Goals of this session:
Understand the importance of assessment goals/objectives
Define key terms related to methods
Explain different types of assessment
Determine key factors in choosing methods
Begin with the end in mind…
Key questions to ask:
Why are you doing this assessment?
What do you hope to learn from doing the assessment?
Who is your audience for your assessment results?
Does that audience like numbers, stories, or both?
Time to think: answer these questions as best you can for your project.
Start with the “Why” and “What”
Learning Outcomes
Program Outcomes
Goals or sub-goals
Objectives
Questions
An outcome is the desired effect of a service or intervention, but is much more specific than a goal. It is participant or output centered.
Good Outcome Statements
Translate intentions into actions
Describe what participants should demonstrate or produce
Use action verbs
Align with other intentions (institutional, departmental)
Map to practices
Are collaboratively authored
Reflect/complement existing national criteria
Are measurable
Source: Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution.
SMART Outcomes
Specific: Clear and definite terms describing expected abilities, knowledge, values, attitudes, and performance
Measurable: It is feasible to get the data, the data are accurate and reliable, and the outcome can be assessed in more than one way
Aggressive but Attainable: Consider stretch targets to improve the program
SMART Outcomes
Results-oriented: Describe what standards are expected of students
Time-bound: Describe where you would like to be within a specified period of time
Adapted from Paula Krist, Director of Operational Effectiveness and Assessment Support, University of Central Florida, May 2006.
If you’re not sure, this may help….
Some things to think about:
What causes it?
Who is especially involved in it?
When does it occur?
What effects does it have?
What types are there?
How do various groups perceive it?
In what stages does it occur?
What will make it better?
What makes it effective?
What relationship does it have to other phenomena?
Clues to Help You Find a Direction:
Usage Numbers – tracks participation in programs or services
Student Needs – keeps you aware of the needs of the student body or of specific populations
Student Satisfaction/Perceptions – level of satisfaction with, or perception of, campus
Learning Outcomes – show that a specific program is meeting its objectives (Bloom's Taxonomy)
Clues to Help You Find a Direction:
Cost Effectiveness – how the value of offering a program/service compares with its cost
Comparable (Benchmarking) – comparing a program/service against a comparison group
Using National Standards (e.g., CAS) – comparing a program/service with a set of pre-established standards
Campus Climate or Environment – assess the behaviors/attitudes on campus
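For the cost-effectiveness clue above, a back-of-the-envelope comparison is often enough. The sketch below uses hypothetical figures and a hypothetical benchmark value purely to illustrate the arithmetic:

```python
# Hypothetical figures for illustration only: compare a program's cost per
# participant against a benchmark (e.g., a peer institution or a prior year).
program_cost = 12_500                    # total cost of offering the program, in dollars
participants = 340                       # usage/participation count
benchmark_cost_per_participant = 45.00   # hypothetical comparison value

cost_per_participant = program_cost / participants
print(f"Cost per participant: ${cost_per_participant:.2f}")
print(f"At or below benchmark: {cost_per_participant <= benchmark_cost_per_participant}")
```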
A “Quickie” Outcome:
By the end of this program, students will…
Through interacting with this office, people will…
This office plans to … by …
By participating in …, students will…
But what about the who? Keep the “who” in mind when phrasing outcomes.
Choosing Your Method:
Matches: the measure directly matches the outcome it is trying to assess
Appropriate methods: uses appropriate direct and indirect methods
Targets: indicates the desired level of performance
Useful: measures help identify what to improve
Reliable: based on tested, known methods
Effective and Efficient: characterize the outcome concisely
Consider These Key Factors:
What tools are in your toolbox?
What are the strengths/challenges of each tool?
What is your timeline?
What resources (time, $$, people) do you have?
Is there potential for collaboration?
Does the data already exist?
What politics are involved? (internal vs. external method)
Who is your audience and what type of data would they find useful? (quantitative vs. qualitative)
Do you need indirect or direct measures?
Do you need formative or summative data? Or both?
What type of data do you need?
Quantitative:
Focus on numbers/numeric values
Easier to report and analyze
Can generalize to a greater population with larger samples
Less influenced by social desirability
Sometimes takes less time and money
Examples of quantitative methods: surveys, usage numbers, rubrics (if assigning numbers), tracking numbers

Qualitative:
Focus on text/narrative from respondents
More depth/robustness
Ability to capture “elusive” evidence of student learning and development
Specific sample
Examples of qualitative methods: interviews, focus groups, portfolios, rubrics (if descriptive), photo journaling
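As a concrete illustration of the difference, the sketch below summarizes a handful of hypothetical quantitative (Likert) responses and hypothetical coded qualitative themes; the data and theme labels are made up for illustration only:

```python
# Hypothetical data for illustration: typical summaries of quantitative vs. qualitative evidence.
from collections import Counter
from statistics import mean

# Quantitative: Likert-scale survey responses (1 = strongly disagree ... 4 = strongly agree)
survey_responses = [4, 3, 4, 2, 4, 3, 3, 4]
agree_share = sum(r >= 3 for r in survey_responses) / len(survey_responses)
print(f"Mean agreement: {mean(survey_responses):.2f}")
print(f"Percent agreeing (3 or 4): {agree_share:.0%}")

# Qualitative: themes coded from open-ended responses or focus-group transcripts
coded_themes = ["advising", "peer support", "advising", "time management", "advising"]
print(Counter(coded_themes).most_common(3))  # most frequently mentioned themes
```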
Direct vs. Indirect Methods
Direct Methods – any process employed to gather data that requires students to display their knowledge, behaviors, or thought processes.
Indirect Methods – any process employed to gather data that asks students to reflect upon their knowledge, behaviors, or thought processes.
Example: Direct vs. Indirect
INDIRECT: Please rate your level of agreement with the following: I know of resources on campus to consult if I have questions about which courses to register for in the fall.
Strongly agree / Moderately agree / Moderately disagree / Strongly disagree

DIRECT: Where on campus would you go, or who would you consult with, if you had questions about which courses to register for in the fall? (Open text field)
Formative vs. Summative
Formative Assessments:
Conducted during the program
Purpose is to provide feedback
Used to shape, modify, or improve the program

Summative Assessments:
Conducted after the program
Make a judgment on quality or worth, or compare against a standard
Can be incorporated into future plans
Validity and Reliability
(Figure: a bull’s-eye target. The bull’s eye is the “thing” we are trying to measure; the red dots are the questions on the test or instrument.)
It’s often easy to think of reliability and validity this way: imagine the questions on a test are the red dots, and the bull’s eye is the thing we’re trying to measure. For example, the bull’s eye might be some aspect of leadership, like group cohesion, and the items might ask things like “How well do you get along with others in a group?” and “When in a group setting, to what degree do others see you as a positive member of the group?”
Special thanks to Peter Swerdzewski.
(Figure: three bull’s-eye targets. Panel 1: results are reliable and valid. Panel 2: results are reliable but NOT valid; you can have reliability without validity! We’re consistently measuring something, but what is it? Panel 3: results are NOT reliable and NOT valid.)
In the first example, the red dots that represent our questions are all grouped together, suggesting that students are answering them consistently, so the results are reliable. The red dots are also sitting right on our bull’s eye. Remember that the bull’s eye represents the thing we’re trying to measure, like leadership, so the results are valid.
In the second example, the red dots (the questions on our test) are all grouped together, so the results are consistent; there is a high degree of reliability. However, the results are not hitting the bull’s eye! We think we’re measuring something like leadership, but we’re not, so the results are not valid.
In the third example, the red dots (our test questions) are all over the place; they are not consistent. An example of this would be a student reading the question “How well do you get along with others in a group?” and indicating “Not at all,” then reading the question “When in a group setting, to what degree do others see you as a positive member of the group?” and answering “Always.” These two questions measure a very similar thing, but the results are not consistent! The red dots are also not hitting the bull’s eye, the thing we’re trying to measure. Basically, this would be a test or instrument with a bunch of random questions that don’t really mean anything when taken together. You wouldn’t want to add up the results from this test and make decisions based on the total score; it wouldn’t make sense!
Slide courtesy of Dr. Dena Pastor, James Madison University. Special thanks to Peter Swerdzewski.
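One common way to put a number on the consistency described above is Cronbach’s alpha, which rises toward 1 as respondents answer a set of items consistently. Below is a minimal Python sketch using hypothetical Likert responses to the two leadership items mentioned in the notes:

```python
# Minimal sketch with hypothetical data: Cronbach's alpha as an index of
# internal consistency (reliability) for a set of survey items.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows are respondents, columns are items on the instrument."""
    k = item_scores.shape[1]                              # number of items
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical Likert responses (1-4): each row is one student,
# columns are the two leadership items from the notes above.
responses = np.array([
    [4, 4],
    [3, 4],
    [2, 2],
    [4, 3],
    [1, 1],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # closer to 1 = more consistent answers
```

Note that a high alpha only speaks to reliability; as the slide stresses, consistent items can still miss the bull’s eye, so validity has to be judged separately.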
Tips for Choosing Methods:
Build up your assessment toolbox and know your options
Assessment is meant to inform practice (KISS); start off small (especially if there is resistance)
Too much data can slow you down
Assessment is an ongoing process: reflect on the process/results, and don't be afraid to change
Read literature/attend conferences through a new lens
Talk and get feedback
Ask questions
Always ask if the data already exists
Include stakeholders from the beginning; use external sources as needed
Start with the ideal design, then work backwards to what is possible
Decide what you will accept as sufficient evidence, but keep your audience in mind
Always interpret your results in light of your design
Questions?
Resources:
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
Schuh, J. H. (2009). Assessment methods for student affairs. San Francisco: Jossey-Bass.
Stage, F. K., & Manning, K. (2003). Research in the college context: Approaches and methods. New York: Brunner-Routledge.
Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.
Contact Information
Kim Yousey-Elsener, PhD
Associate Director, Assessment Programs, StudentVoice
210 Ellicott Street, Suite 200
Buffalo, NY 14203