Research Methods in Psychology


1 Research Methods in Psychology

2 I. Deduction versus Induction
A. Deduction: the process of reasoning in which a conclusion follows necessarily from the stated premises.
B. Induction: the process of inferring a general principle from observations.

II. Scientific Theory
A. Theory: an explanation or model created from a great many observations and capable of making valid predictions or hypotheses.
B. Falsifiable: stated in such clear, precise terms that we can see what evidence would count against it.
C. Burden of Proof: the obligation to present evidence to support one’s claim.

3 III. Scientific Method: the way in which scientists go about investigating and making claims about phenomena.
A. Hypothesis: a tentative explanation for an observation that can be tested through research.
B. Method: the process by which you test your hypothesis.
C. Results: the recorded outcome of the method.
D. Interpretation: your evaluation of the results.
E. Replicability: the ability of other researchers to reproduce previous results through further experimentation using the same procedures.

4 F. Meta-Analysis: an analysis that combines the results from many studies and then analyzes them as if they were all from one large study (see the sketch below).
G. Occam’s Razor: of competing explanations, the simplest one is usually the most accurate.
1) Aliens!
2) ESP!
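One standard way to do that combining is fixed-effect, inverse-variance weighting: each study's effect size is weighted by 1/SE², so more precise studies count for more. A minimal Python sketch; the effect sizes and standard errors are invented purely for illustration:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Pool study effect sizes using inverse-variance weights."""
    weights = [1.0 / se ** 2 for se in std_errors]        # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))             # pooled estimate is more precise
    return pooled, pooled_se

# Hypothetical effect sizes (e.g., Cohen's d) from three studies.
effects = [0.30, 0.55, 0.42]
std_errors = [0.10, 0.20, 0.15]
pooled, se = fixed_effect_pool(effects, std_errors)
print(f"pooled effect = {pooled:.2f} (SE = {se:.2f})")    # 0.37 (SE = 0.08)
```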

5 IV. Conducting Psychological Research
A. Operational Definition: a definition that specifies the procedures used to produce or measure something.
B. Population: the entire group of people under consideration.
C. Sample: a smaller number of people drawn from the population (see the sketch below).
1) Convenience Sample: a sample made up of whoever is readily available.
2) Representative Sample: a sample that closely resembles the population being studied.
3) Random Sample: a sample in which each member of the population has an equal chance of being selected.
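The difference between a random and a convenience sample is easy to see in code. A minimal Python sketch with a hypothetical population of 1,000 student IDs:

```python
import random

population = list(range(1, 1001))   # hypothetical population of 1,000 student IDs

random.seed(42)                     # fixed seed so the example is reproducible
random_sample = random.sample(population, k=50)  # every member has an equal chance

# A convenience sample, by contrast, is just whoever is easiest to reach --
# say, the first 50 students who walk into the lab. Cheap, but likely biased.
convenience_sample = population[:50]
```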

6 V. Eliminating the Influence of Expectations
A. Experimenter Bias: the tendency of an experimenter to distort or misperceive the results of an experiment based on the expected outcome.
B. Blind Observer: someone who records data without knowing the experimenter’s expected outcome.
C. Placebo: a pill with no pharmacological effect.
D. Single-Blind Study: either the observer or the participants are unaware of which participants received which treatment.
E. Double-Blind Study: both the observer and the participants are unaware of who is in which condition.
F. Demand Characteristics: cues that tell participants what is expected of them and what the experimenter hopes to find.

7 VI. Forms of Data Collection
A. Laboratory Observation: behavior is observed and recorded in a controlled environment.
B. Naturalistic Observation: a careful examination of what happens under more or less natural conditions.
C. Case History: a thorough description of a person, including the person’s abilities and disabilities, medical conditions, life history, unusual experiences, and whatever else seems relevant.
D. Survey: a study of the prevalence of certain beliefs, attitudes, or behaviors based on people’s responses to specific questions.
1) Sampling: doing this correctly is especially important with surveys.
2) Survey Scales: Likert scales versus visual analog scales (VAS).

8 VII. Correlational Studies
A. Correlation: a measure of the relationship between two variables.
B. Correlational Study: a procedure in which investigators measure the correlation between two variables without controlling either of them.
C. Correlation Coefficient: a mathematical estimate of the strength and direction of the relationship between two variables, ranging from –1 to +1 (see the sketch below).
D. Illusory Correlation: an apparent relationship based on casual observations of unrelated or weakly related events.
E. Meaningless Correlation: a correlation that is real but tells us nothing useful.
F. Correlation vs. Causation: the fact that two variables are correlated does not mean that one causes the other.
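A minimal Python sketch of Pearson’s r, the most commonly used correlation coefficient; the hours-studied and exam-score numbers are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1 perfect positive,
    -1 perfect negative, 0 no linear relationship."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours studied vs. exam score for five students.
hours = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 72, 85]
print(round(pearson_r(hours, scores), 2))  # 0.98 -- strong, but still not proof of causation
```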

9 VIII. Causation
A. Experiment: a study in which the investigator manipulates at least one variable while measuring at least one other variable.
B. Independent Variable: the item that the experimenter manipulates to produce an effect.
C. Dependent Variable: the item that the experimenter measures to see whether the independent variable had an effect.
D. Experimental Group: the group that receives the treatment (the independent variable) that an experiment is designed to test.
E. Control Group: a group treated just like the experimental group, except that it does not receive the treatment.
F. Random Assignment: the experimenter uses a chance process to assign participants to groups (see the sketch below).
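A minimal Python sketch of random assignment: chance alone decides who goes in which group, so pre-existing differences are spread evenly across groups on average. The participant labels are hypothetical:

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(7)                 # fixed seed so the example is reproducible
random.shuffle(participants)   # chance, not the experimenter, orders the list

experimental_group = participants[:10]   # receives the treatment
control_group = participants[10:]        # identical handling, minus the treatment
```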

10 IX. Other Factors
A. Ethical Concerns with Humans: experimenters must be careful that the designs of their studies do not harm participants mentally, emotionally, or physically.
B. Ethical Concerns with Non-Humans: the same concerns as with humans, but applied more leniently.
C. Informed Consent: a statement informing participants what to expect in an experiment and requiring their acceptance of the procedures.
D. Debriefing: a post-experiment interview between experimenters and participants, verifying that participants are fully informed about, and were not harmed in any way by, their experience in the experiment.

11 X. The Evaluation of Psychological Tests
A. Standardization: the process of establishing rules for administering a test and for interpreting its scores.
B. Norms: descriptions of how frequently various scores occur.
1) EXAMPLE: the distribution of IQ scores.
2) IQ scores follow a normal distribution (bell-shaped curve); see the sketch below.
a) The average IQ score for all age groups is designated as 100.
b) About 68% of people fall within one standard deviation (SD) above or below the mean.
c) About 95% fall within two SD units in either direction.
d) 2 SD above the mean = gifted; 2 SD below = mentally challenged.
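Those percentages can be checked with Python’s standard library. The slide does not state the standard deviation, so the conventional value of 15 is assumed here:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # mean 100 by definition; SD of 15 is assumed

within_1_sd = iq.cdf(115) - iq.cdf(85)   # about 0.683, i.e. ~68%
within_2_sd = iq.cdf(130) - iq.cdf(70)   # about 0.954, i.e. ~95%
above_2_sd = 1 - iq.cdf(130)             # about 0.023 -- the "gifted" tail

print(f"within 1 SD: {within_1_sd:.1%}")  # 68.3%
print(f"within 2 SD: {within_2_sd:.1%}")  # 95.4%
print(f"2 SD above:  {above_2_sd:.1%}")   # 2.3%
```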

12 C. Reliability: the repeatability of the test’s scores.
1) Test-Retest Reliability (see the sketch below):
a) Test the same group of people twice with the same test, or
b) test the same group twice with equivalent versions of the test.
D. Validity: a determination of how well a test measures what it claims to measure.
1) Content Validity: the test’s items accurately represent the material the test is meant to measure.
2) Construct Validity: the validity of the theoretical construct that the test is designed to measure.
3) Predictive Validity: the ability of the test to predict some real-world task performance.
E. Utility: a test’s usefulness for a practical purpose.
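A minimal sketch of test-retest reliability: the estimate is simply the correlation between two administrations of the test. The scores are invented, and statistics.correlation requires Python 3.10 or later:

```python
from statistics import correlation

# Hypothetical scores for the same five people, tested twice two weeks apart.
first_session = [82, 95, 77, 88, 91]
second_session = [80, 97, 75, 90, 88]

# Values near +1 mean the test yields highly repeatable scores.
r = correlation(first_session, second_session)
print(f"test-retest reliability: r = {r:.2f}")  # 0.97
```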

