Computer Science Education
Debra Lee Davis, M.S., M.A., Ph.D.


What to Research?
• What question(s) do we want to answer?
• What do we already know?

What to Research?  What question(s) do we want to answer? I want to find out whether the use of a learning system that incorporates social media capabilities and gamefication will increase student learning and retention in CS

What to Research?  What do we already know? We know that following has been found: ○ Greater student engagement = > greater learning and retention ○ Social media has strong engagement in XXX ○ The use of achievements in games => resilience & increased effort when failure occurs ○ Having students work together on problems increases learning ○... Etc.

What to Research?  Formulate Goals/Objectives We want to determine whether the use of team-based achievement goals increases student learning of software testing techniques  Formulate Research Questions Is it testable?

What to Research?  Formulate Research Questions Is it testable? ○ Is there a significant difference between student learning of software testing techniques for students that used WReSTT’s team-based achievement goals capabilities and those who did not? ○ Is there a significant relationship between the level of achievement a team is able to successfully achieve and how well the students in that team are able to correctly apply software testing techniques?

Types of Significance
• Statistical Significance
  ○ Helps you learn how likely it is that these changes occurred randomly and do not represent differences due to the program
  ○ p values
• Substantive Significance
  ○ Helps you understand the magnitude or importance of the differences
  ○ Effect size
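
A minimal sketch in Python (not part of the original slides; the scores are made up) of the two kinds of significance: an independent-samples t-test gives the p value (statistical significance), and Cohen's d gives the effect size (substantive significance).

```python
import numpy as np
from scipy import stats

# Hypothetical posttest scores for a control and an experimental group
control = np.array([62, 70, 68, 74, 65, 71, 69, 66, 73, 67], dtype=float)
experimental = np.array([72, 78, 75, 81, 70, 77, 79, 74, 76, 80], dtype=float)

# Statistical significance: how likely is a difference this large
# if the treatment actually had no effect?
t_stat, p_value = stats.ttest_ind(experimental, control)

# Substantive significance: how large is the difference, in pooled
# standard-deviation units? (Cohen's d for equal-sized groups)
pooled_sd = np.sqrt((control.var(ddof=1) + experimental.var(ddof=1)) / 2)
cohens_d = (experimental.mean() - control.mean()) / pooled_sd

print(f"p value = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```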

What to Research?  Formulate Research Questions Is it testable? ○ Is there a significant difference between student learning of software testing techniques for students that used WReSTT’s team-based achievement goals capabilities and those who did not? ○ Is there a significant relationship between the level of achievement a team is able to successfully achieve and how well the students in that team are able to correctly apply software testing techniques?

How to test our RQs
• How will we test it?
  ○ Is there a significant difference in learning of software testing techniques between students who used WReSTT's team-based achievement goals capabilities and those who did not?
• How do we define student learning?
  ○ Ability to recall which types of software testing techniques should be used in different situations
  ○ Ability to correctly conduct appropriate software testing techniques when given a sample scenario

How to test our RQs
• Measures/Instruments
  ○ Validity – Are you measuring what you think you're measuring?
  ○ Reliability – How consistent is your measure? You should get similar results if the experiment is repeated under similar conditions.
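
To make the reliability idea concrete, here is a minimal sketch (assumed, not from the slides) that computes Cronbach's alpha, a common internal-consistency reliability measure, for a hypothetical multi-item software testing quiz.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability; scores has shape (n_students, n_items)."""
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical scores for six students on four quiz items (0-5 points each)
quiz = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(quiz):.2f}")  # closer to 1 = more consistent
```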

Validity and Reliability

Common Myths  Myth 1 “Collect whatever data you can, and as much of it as you can. You can figure out your questions later. Something interesting will come out and you will be able to detect even very subtle effects” Reality - NO! Generally collecting lots of data without a plan just gets you lots of garbage, and you may even go hunting to try to come up with questions to ask based on the data you collected instead of your goals.

Common Myths  Myth 2 “It’s better to spend time collecting data than sitting around thinking about collecting data, just get on with it” Reality - A well designed experiment will save you tons of time.

Common Myths  Myth 3 “It does not matter how you collect your data, there will always be a statistical ‘fix’ that will allow you to analyze them” Reality - NO WAY! You may end up with data that you can’t make any conclusions with because there are so many flaws with the design. Big problems are non-independence and lack of control groups.

Research Methodology
• Types of Studies
  ○ Relationships (Correlations)
  ○ Cause & Effect (Experimental)
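
As a concrete contrast (a sketch with made-up data, not from the slides), the second research question above is naturally a relationship/correlational analysis rather than an experiment: we can measure an association between the achievement level a team reached and students' scores on applying testing techniques, but not claim cause and effect.

```python
import numpy as np
from scipy import stats

# Hypothetical data: team achievement level reached vs. each student's score
# on correctly applying software testing techniques
achievement_level = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5], dtype=float)
application_score = np.array([55, 60, 62, 58, 70, 68, 75, 78, 82, 80], dtype=float)

r, p_value = stats.pearsonr(achievement_level, application_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# Even a strong, significant correlation would not show that achievements
# *cause* better application; that claim requires an experimental design.
```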

What is Experimental Design?
An experiment is a research process in which one or more variables can be manipulated under conditions that permit the collection of data showing the effect of those variables in an unconfounded fashion.

Variables  Independent variables (IV): What you have control over and are testing; e.g., use of team achievements  Dependent variables (DV): What you are measuring; ○ e.g., student learning  Research Question: Is there a significant difference between student learning of software testing techniques for students that used WReSTT’s team- based achievement goals capabilities and those who did not?

Variables  Confounding/Extraneous variables: Variables that may be influencing the Dependent variable(s) that are not the ID; Can lead to alternate explanations for your results E.g., different course materials used in different classes

Key issues – Experimental Control
• Experimental or Treatment Groups
• Control Group
• Random Assignment
  ○ Quasi-Experimental Design (when random assignment is not used)
• Pretests/Posttests
  ○ Experimental Group – Pretest, Treatment, Posttest
  ○ Control Group – Pretest, Posttest
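
A minimal sketch (hypothetical student IDs, not from the slides) of random assignment, the step that separates a true experiment from a quasi-experimental design that relies on pre-existing class sections.

```python
import random

# Hypothetical roster of consenting students
students = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(students)   # every student gets an equal chance of either group
half = len(students) // 2
control_group = students[:half]
experimental_group = students[half:]

print("Control:     ", sorted(control_group))
print("Experimental:", sorted(experimental_group))
```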

Good Experimental Design

Group         Semester   Pretest   Treatment   Posttest
Control       Fall       Yes       No          Yes
Control       Spring     Yes       No          Yes
Experimental  Fall       Yes       Yes         Yes
Experimental  Spring     Yes       Yes         Yes

What happens if the Control and Experimental groups are in different semesters?
Compare pretests to (hopefully) show the groups are equivalent.
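
One way to analyze the design in the table above, sketched here with made-up scores (not from the slides): first compare pretests to check that the groups start out equivalent, then compare gain scores (posttest minus pretest) to estimate the effect of the treatment (IV) on student learning (DV).

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for each group
control_pre  = np.array([55, 60, 58, 62, 57, 61], dtype=float)
control_post = np.array([63, 66, 64, 68, 62, 67], dtype=float)
exp_pre      = np.array([56, 59, 60, 61, 58, 57], dtype=float)
exp_post     = np.array([74, 78, 80, 77, 75, 79], dtype=float)

# 1. Pretest equivalence: a large p value is consistent with equivalent groups
_, p_pretest = stats.ttest_ind(exp_pre, control_pre)

# 2. Treatment effect: compare gain scores between groups
_, p_gain = stats.ttest_ind(exp_post - exp_pre, control_post - control_pre)

print(f"Pretest equivalence p = {p_pretest:.3f}")
print(f"Gain-score comparison p = {p_gain:.3f}")
```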

Common Mistakes
• No control group
• No pretest
• Confounding variables not controlled or accounted for
• Attrition
• Inappropriate conclusions
• Causal conclusions in correlational studies
• Invalid, ambiguous, or mismatched measures for the questions being asked
• Only looking at main effects and not interactions
• Experimenter bias (cultural, gender, etc.) and "nonsense" conclusions
• Inappropriate data analyses

Working with Human Subjects
• Need IRB approval
• Cannot force or coerce students to participate
• Participants may leave the study at will
• Privacy concerns
• Benefits outweigh the risks to participants (prefer no risks)