Final Exam Review Psychology 242, Dr. McKirnan


Final Exam Review, Psychology 242, Dr. McKirnan
© Dr. David J. McKirnan, 2014, The University of Illinois Chicago. McKirnanUIC@gmail.com. Do not use or reproduce without permission.
Image: Cranach, Tree of Knowledge [of Good and Evil] (1472).

What is science? Understand that science is not just about the numbers; it embodies content, methods, and some key values: critical thought and an empirical approach to understanding the world.
Values: critical thought; theory (Why? or How?); evidence (How do you know?); discovering the natural world.
Content: empirical findings (facts); ways of classifying nature; well supported theories.
Methods: core empirical approach; basic experimental design; specific research procedures; statistical reasoning.

How do we know things? Four basic sources of knowledge or information:
Authority: credible / powerful people; social institutions; tradition.
Intuition: emotionality or a "hunch"; "emotional IQ."
Empiricism: simple sensation / perception; direct observation; data. Most central to science.
Rationalism: logical coherence; articulation with other ideas.

What does science do? Have a clear sense that science has multiple aspects:
Describe the world: simple description and systematic, hypothesis-testing description; leads to hypotheses.
Predict events: the core feature of a hypothesis is "if X, then Y."
Test theories: cause-and-effect questions involving hypothetical constructs; often controlled experiments or complex correlational designs.
Test applications of theories: testing interventions or policy change.
Psychology 242, Dr. McKirnan, Week 2: Role & structure of science.

Basic Elements of a Research Project. Core elements of a research study; you should understand these steps by now:
Phenomenon (big picture / question): begin with the "big question"…
Theory (hypothetical constructs, causal explanation): …articulate a clear theory…
Hypothesis (operational definition, specific prediction): …and derive concrete hypotheses.
Methods (measurement v. experimental): then specific methods, the core of a scientific study.
Data / Results (descriptive data, test hypothesis): then actual data & results…
Discussion (implications for theory): …implications for the theory…
Conclusions (future research?): …and larger issues.
Psychology 242, Dr. McKirnan, Week 2: Role & structure of science.

Basics of Design: Internal Validity. Can we validly determine what is causing the results of the experiment?
General hypothesis: the outcome (DV) is caused only by the experiment itself (the Independent Variable).
Confound: a "3rd variable" (an unmeasured variable other than the Independent Variable) actually led to the results.
Core design issue: the experimental & control groups are exactly the same at baseline; a confound arises when the groups differ for some other reason, e.g., self-selection into the study.

True v. quasi-experimental designs.
True experiments: emphasize internal validity; assess cause & effect (in a relatively artificial environment); test clear, a priori hypotheses; have a control group; participants randomly assigned to experimental or control groups; participants & experimenter blind to assignment; control of study procedures; manipulate the independent variable; control procedures & measures.
Quasi-experiments: emphasize external validity; describe "real" / naturally occurring events; clear or exploratory hypotheses; non-equivalent, existing groups; non-random assignment; participants not blind; self-selection; full control may not be possible; may not be able to manipulate the independent variable; partial control of procedures & measures.

External validity: summary. Can we validly generalize from this experiment to the larger world?
The research sample: is the sample typical of the larger population?
The research setting (the study structure & context): is it typical of "real world" settings where the phenomenon occurs?
The Dependent Variable: is the outcome measure representative, valid & reliable?
The Independent Variable: does the experimental manipulation (or measured predictor) actually create (or validly assess) the phenomenon you are interested in?

Validity & research approaches. Strategies range from observation or measurement to experiments:
Qualitative description: explore the actual process of a behavior.
Simple (quantitative) description: describe a behavioral or social trend.
Correlational studies: relate measured variables to each other to test hypotheses.
Quasi-experiments: test hypotheses in naturally occurring events or field studies.
"True" experiments: test specific hypotheses via controlled "lab" conditions.
Less control (external validity): observe / test the phenomenon under natural conditions; a more accurate portrayal of how it works in nature; less able to interpret cause & effect.
More control (internal validity): create the phenomenon in a controlled environment; address specific questions or hypotheses; better able to interpret cause & effect.
Know what these research strategies represent & how they differ. Understand the trade-off of internal & external validity across them.

Threats to internal validity (confounds) in quasi-experiments without a control group (design: Group – Observe1 – intervention or event [plus a possible confound] – Observe2):
History: historical / cultural events occur between baseline & follow-up.
Maturation: individual maturation or growth occurs between baseline & follow-up.
Reactive measures: people respond to being measured, or to being measured a second time.
Statistical regression: extreme scores at baseline "regress" to a more moderate level over time.
Mortality / drop-out: people leave the experiment non-randomly (i.e., for reasons that may affect the results…).
You do not need to memorize these, just get the logic. What is a confound? Why is that important?

Sampling overview. Who do you want to generalize to?
Who is the target population? Broad → external validity; narrow → internal validity.
How do you decide who is a member? Demographics? Behavior? Attitudes or beliefs?
What do you know about the population already – what is the "sampling frame"?

Sampling overview, continued. Who do you want to generalize to? Who is the target population? How do you decide who is a member? What do you know about the population already – what is the "sampling frame"? Will you use a:
Probability or random sample? Most externally valid & representative; assumes a clear sampling frame and that the population is available; less valid for hidden groups.
Non-probability or convenience sample (targeted / multi-frame, snowball…)? Less externally valid; best when there is no clear sampling frame or the population is hidden / avoidant.
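As a toy illustration of the distinction (the sampling frame, names, and sizes below are invented, not from the course), a probability sample is drawn directly from a known frame, while a convenience sample just takes whoever is easiest to reach:

```python
import random

# Hypothetical sampling frame: a known, complete list of population members
sampling_frame = [f"student_{i}" for i in range(1, 501)]

random.seed(42)  # reproducible draw for the example

# Probability (simple random) sample: every frame member has an equal, known chance
probability_sample = random.sample(sampling_frame, k=50)

# Convenience sample: e.g., the first 50 people who happen to be available --
# easier to collect, but less externally valid / representative
convenience_sample = sampling_frame[:50]

print(probability_sample[:5], convenience_sample[:5])
```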

The "Common Rule" criteria for Human Subjects Protection: minimize risks; risks must be reasonable; recruit participants equitably; informed consent; document consent; monitor for safety; protect vulnerable participants & maintain confidentiality. These comprise a cost – benefit analysis. Understand what each of these means.

Belmont Report (CITI training):
1. Respect for Persons: participants exercise autonomy & make informed choices.
2. Beneficence: minimize risk & maximize social/individual benefit.
3. Justice: do not unduly involve groups who are unlikely to benefit; include participants of all races & both genders; communicate results & develop programs/interventions.
You know these from your CITI training. Generally understand them; be able to recognize these key values.

Descriptive research.
Quantitative: describe an issue via valid & reliable numerical measures; simple frequency counts of key behavior; "blocking" by other variables; correlational research ("what relates to what").
Qualitative or observational: study behavior "in nature" (high ecological validity); qualitative methods include interviews, focus groups, and textual analysis; observational methods may be direct or unobtrusive.
Existing data: use existing data for new quantitative (or qualitative) analyses. Accretion measures study the "remnants" of behavior and are wholly non-reactive; archival research uses existing data to test a new hypothesis and is typically non-reactive.
What does it mean for research to be 'reactive'? Psychology 242, Dr. McKirnan, Descriptive Research.

Correlation designs: drawbacks & fixes.
Causality: a simple correlation may confuse cause & effect (e.g., does alcohol consumption lead to depression, or depression to alcohol consumption?).
Confounds: the unmeasured "3rd variable" problem (e.g., hemlines and the stock market may both track general optimism).
Dealing with confounds: use complex measurements or samples to eliminate alternate hypotheses.
Understand both of these interpretation difficulties.

Types of numerical scales.
Ratio: zero point grounded in a physical property; values are "absolute"; physical description, e.g., elapsed time, height.
Interval: no true zero point; scale values are relative; behavioral research, e.g., attitude or rating scales.
Ordinal: simple rank order, e.g., finish place, rank in an organization…
Categorical: 'values' are categories only; inherent categories such as ethnic group, gender, zip code.
Ratio and interval scales are continuous (scores on a continuum). Be able to provide or recognize examples of these scale types.

Scales and central tendency: different scales use different measures of central tendency.
Mode (most common score): categorical variables; often bimodal distributions.
Median (middle of the distribution): categorical or continuous variables; highly skewed data.
Mean (average score): continuous variables only; "normal" distributions.

Measures of dispersion or variance. Two measures of variance:
1. Range: from the highest to the lowest score.
2. Standard deviation of scores around the Mean: the "average" amount each score deviates from the M; "standardizes" scores to a normal curve, allowing for basic statistics.
You should know these by now.
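As a small sketch of these descriptive statistics (the scores and variable names are invented for illustration), Python's standard library computes each directly:

```python
import statistics

scores = [2, 3, 3, 4, 4, 4, 5, 6, 9]    # hypothetical sample of scores

mode = statistics.mode(scores)           # most common score
median = statistics.median(scores)       # middle of the distribution
mean = statistics.mean(scores)           # average score
score_range = max(scores) - min(scores)  # highest minus lowest score
sd = statistics.stdev(scores)            # "average" deviation of scores around the mean

print(mode, median, round(mean, 2), score_range, round(sd, 2))
```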

You must know the Z score: Z = (X – M) / S. It is the core form of the critical ratio: the strength of the experimental effect, adjusted by the amount of error variance.
Numerator (X – M): how far your score (X) is from the mean (M).
Denominator (S): how much variance there is among all the scores in the sample [the standard deviation].

Z and the normal distribution. The normal distribution is a hypothetical distribution of cases in a sample. It is segmented into standard deviation units, and each standard deviation unit (Z) represents a fixed % of cases. We use Z scores & the associated % of the normal distribution to make statistical decisions about whether a score might occur by chance. Remember approximations of these numbers. If you do not fully understand this slide go back to the Statistics 1 focus module and figure it out!!

Normal distribution; Z scores. Use Z to evaluate a score: distance from M relative to "error" variance.
Calculate how far the score (X) is from the mean (M): X – M. "Adjust" X – M by how much variance there is in the sample, via the standard deviation (S). Z = (X – M) / S.
How "good" is a score of '6' in two groups?
Table 1, high variance: Mean (M) = 4, Score (X) = 6, Standard Deviation (S) = 2.4. X – M = 6 – 4 = 2; Z = (X – M)/S = 2/2.4 = 0.83.
Table 2, low(er) variance: Mean (M) = 4, Score (X) = 6, Standard Deviation (S) = 1.15. X – M = 6 – 4 = 2; Z = (X – M)/S = 2/1.15 = 1.74.
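A minimal sketch of the same calculation in Python (the function name z_score is ours), reproducing the two worked values above:

```python
def z_score(x, mean, sd):
    """Z = (X - M) / S: distance of a score from the mean in standard deviation units."""
    return (x - mean) / sd

# Table 1, high variance: M = 4, X = 6, S = 2.4
print(round(z_score(6, 4, 2.4), 2))    # 0.83

# Table 2, lower variance: M = 4, X = 6, S = 1.15
print(round(z_score(6, 4, 1.15), 2))   # 1.74
```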

Evaluating scores using Z: a criterion for a "significantly good" score.
High variance: X = 6, M = 4, S = 2.4, Z = 0.83. Lower variance: X = 6, M = 4, S = 1.15, Z = 1.74.
If your criterion for a "good" score is that it surpass 90% of all scores, then with high variance a '6' is not "good"; with lower variance a '6' is good.
[Figure: normal curve over Z scores (standard deviation units, -3 to +3), with shaded regions marking the proportion of cases each score surpasses.]
I need you to understand the logic of this approach. (Exam #3 study guide.)

Summary: statistical decisions follow the critical ratio.
Z is the prototype critical ratio: Z = (X – M) / S, i.e., how far your score (X) is from the mean (M), divided by how much variance there is among all the scores in the sample [the standard deviation (S)].
t is also a basic critical ratio, used for comparing groups: t = (M1 – M2) / standard error of the mean, i.e., how different the two group Means are, divided by how much variance there is within each of the two groups (the "standard error of the mean").
You must understand what a critical ratio is. This slide needs to make perfect sense to you!!
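The sketch below puts both critical ratios side by side; the independent-samples t uses the pooled standard error of the mean difference (the data and function names are illustrative, not from the slides):

```python
import statistics
from math import sqrt

def z_ratio(x, mean, sd):
    """Z = (X - M) / S."""
    return (x - mean) / sd

def t_ratio(group1, group2):
    """t = (M1 - M2) / standard error of the difference, using pooled within-group variance."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    pooled_var = ((n1 - 1) * statistics.variance(group1) +
                  (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))   # the "error" term of the critical ratio
    return (m1 - m2) / se

experimental = [6, 7, 5, 8, 7, 6]   # hypothetical scores
control      = [4, 5, 3, 5, 4, 4]
print(round(z_ratio(6, 4, 1.15), 2), round(t_ratio(experimental, control), 2))
```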

Plato's Cave. What does Plato's Allegory of the Cave tell us about scientific reasoning? We cannot observe "nature" directly; we only see its manifestations or images: we are trapped in a world of immediate sensation, and our senses (or measurements…) routinely deceive us (they have error).

Plato's Cave, continued. We study hypothetical constructs: basic "operating principles" of nature, e.g., evolution, gravity, learning, motivation… Processes that we cannot "see" directly… …that underlie events that we can observe. We test hypotheses about what we can see and use rational analysis – theory – to deduce what the "form" of these processes must be, and how they work.

Why can't we just observe "nature" directly?
We can only observe the effects of hypothetical constructs, not the processes themselves.
We examine only a sample of the world; no sample is 100% representative of the entire population.
Our theory helps us develop hypotheses about what we should observe if our theory is "correct". We test our hypotheses to infer how nature works.
Our inferences contain error: we must estimate the probability that our results are due to "real" effects versus chance.
You must understand these basic concepts and terms!

Testing statistical significance. We assume that a score with less than a 5% probability of occurring (i.e., higher or lower than 95% of the other scores) is not due to chance alone (p < .05). Z > +1.98 occurs less than 5% of the time (p < .05), so if Z > +1.98 we consider the score to be "significantly" different from the mean.
To test if an effect is "statistically significant": compute a Z score for the effect and compare it to the critical value for p < .05: ±1.98.
Really important.

Statistical significance & areas under the normal curve: 95% of scores are between Z = -1.98 and Z = +1.98.
[Figure: normal curve over Z scores (standard deviation units, -3 to +3); about 95% of cases fall between Z = -1.98 and Z = +1.98, with roughly 2.4% of cases in each tail.]

Statistical significance & areas under the normal curve. With Z > +1.98 or < -1.98 we reject the null hypothesis & assume the results are not due to chance alone.
In a hypothetical distribution: 2.4% of cases are higher than Z = +1.98 and 2.4% of cases are lower than Z = -1.98. Thus, Z > +1.98 or < -1.98 will occur less than 5% of the time by chance alone.
[Figure: normal curve over Z scores (standard deviation units, -3 to +3); roughly 34.13% of cases fall between the mean and ±1 SD, 13.59% between 1 and 2 SD, about 2.4% beyond Z = ±1.98, and about 95% between Z = -1.98 and Z = +1.98.]
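These areas can be checked directly from the standard normal distribution; a short sketch, assuming scipy is available (it is not part of the course materials):

```python
from scipy import stats

upper_tail = 1 - stats.norm.cdf(1.98)                    # ~0.024: cases above Z = +1.98
lower_tail = stats.norm.cdf(-1.98)                       # ~0.024: cases below Z = -1.98
middle = stats.norm.cdf(1.98) - stats.norm.cdf(-1.98)    # ~0.95: cases between +/-1.98

print(round(upper_tail, 3), round(lower_tail, 3), round(middle, 3))

# Two-tailed cutoff for alpha = .05 in the normal distribution itself
print(round(stats.norm.ppf(1 - 0.05 / 2), 2))   # ~1.96; the slides use 1.98 (the t value at df = 120)
```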

Critical ratio = the strength of the results (our direct observation of nature) / the amount of error variance (the odds that our observation is due to chance).
For group comparisons: t = the difference between the Ms for the two groups (between-group variance) / the variability within groups (within-group variance, i.e., error).
[Figure: overlapping distributions for the control and experimental groups; the distance between Mgroup1 and Mgroup2 is the between-group variance, and the spread of scores within each group is the within-group variance.]

The critical ratio in action. All three graphs have equal differences between group means; they differ only in the variance within groups (low, medium, and high). The critical ratio helps us determine which one(s) represent a statistically significant difference.
Be able to answer these: How do the between-group variance & within-group variance constitute the critical ratio? t represents the critical ratio for group comparisons: how does t vary across these three examples? Which might reflect a statistically significant difference?
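One way to see this is to hold the mean difference constant while varying only the within-group spread. The sketch below uses invented data and scipy's t-test (an assumption about available tools); t shrinks as within-group variance grows:

```python
from scipy import stats

# Three pairs of groups with the same mean difference (2.0) but increasing within-group variance
examples = {
    "low variance":    ([6, 6, 7, 7, 6, 7],  [4, 4, 5, 5, 4, 5]),
    "medium variance": ([5, 7, 6, 8, 6, 7],  [3, 5, 4, 6, 4, 5]),
    "high variance":   ([3, 9, 5, 10, 4, 8], [1, 7, 3, 8, 2, 6]),
}

for label, (experimental, control) in examples.items():
    t, p = stats.ttest_ind(experimental, control)
    print(f"{label}: t = {t:.2f}, p = {p:.3f}")   # same mean difference, different t
```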

Central limit theorem. We test t scores (rs, Fs, etc.) against a hypothetical sampling distribution of possible ts. We assume that a t derived from a smaller sample will have more error variance (within-group variance). When df > 120 we assume a perfectly normal distribution of ts. When df < 120 we compensate by becoming more conservative in our judgments: we set our critical value for testing t to a higher value.
[Figure: the "true" normal sampling distribution centered on the true population M, with scores running from smaller to larger.]

Central limit theorem, small samples: we assume each sample has more error, so a distribution of samples will be "flatter" & more full of error than the "true" normal distribution around the true population M.

Medium samples: we assume less error in each sample, so a distribution of samples will be more "normal" relative to the true population M.

The Central Limit Theorem, large samples: we assume each sample has less error, so the distribution of large samples will be close to the "true" normal distribution around the true population M.
Be able to apply the central limit theorem logic to evaluating t. Translate that to using the t table.
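A quick simulation of this idea (the population parameters, sample sizes, and counts are arbitrary choices of ours): the sampling distribution of the mean gets tighter, and closer to normal, as sample size grows.

```python
import random
import statistics

random.seed(1)
population = [random.gauss(100, 15) for _ in range(100_000)]   # hypothetical population

for n in (5, 30, 200):   # small, medium, and large samples
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(2_000)]
    # Spread of the sampling distribution of the mean (the standard error) shrinks as n grows
    print(n, round(statistics.stdev(sample_means), 2))
```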

Central limit theorem & evaluating t scores Smaller samples (lower df) have more variance. So, t must be larger for us to consider it statistically significant (< 5% likely to have occurred by chance alone). Compare t to a sampling distribution based on df. Critical value for t with p <.05 goes up or down depending upon sample size (df)

A t-table specifies critical values for testing whether an effect is statistically significant:

df       Alpha: 0.10   0.05   0.02   0.01   0.001
8               1.860  2.306  2.896  3.355  5.041
9               1.833  2.262  2.821  3.250  4.781
10              1.812  2.228  2.764  3.169  4.587
11              1.796  2.201  2.718  3.106  4.437
12              1.782  2.179  2.681  3.055  4.318
13              1.771  2.160  2.650  3.012  4.221
14              1.761  2.145  2.624  2.977  4.140
15              1.753  2.131  2.602  2.947  4.073
18              1.734  2.101  2.552  2.878  3.922
20              1.725  2.086  2.528  2.845  3.850
25              1.708  2.060  2.485  2.787  3.725
30              1.697  2.042  2.457  2.750  3.646
40              1.684  2.021  2.423  2.704  3.551
60              1.671  2.000  2.390  2.660  3.460
120             1.658  1.980  2.358  2.617  3.373
∞               1.645  1.960  2.326  2.576  3.291

Highlighted examples: Alpha = .05, df = 8 → 2.306; Alpha = .05, df = 18 → 2.101; Alpha = .05, df = 120 → 1.980; Alpha = .01, df = 40 → 2.704.
Know how to use a t table. What is 'Alpha'? What are Degrees of Freedom (df)? What is a 'Critical Value'?
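The critical values in such a table can be reproduced from the t distribution itself; a sketch using scipy (an assumed library, not part of the course materials), with two-tailed alpha:

```python
from scipy import stats

def critical_t(alpha, df):
    """Two-tailed critical value: |t| must exceed this for p < alpha."""
    return stats.t.ppf(1 - alpha / 2, df)

for df in (8, 18, 120):
    print(df, round(critical_t(0.05, df), 3))   # ~2.306, ~2.101, ~1.980
print(40, round(critical_t(0.01, 40), 3))       # ~2.704
```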

Central Limit Theorem; variations in sampling distributions. As sample sizes (df) go down, the estimated sampling distributions of t scores based on them have more variance, giving a more "flat" distribution. This increases the critical value for p < .05: df = 120, t > ±1.98; df = 18, t > ±2.10; df = 8, t > ±2.31.
Get this! -- Be able to go to a t table and apply this logic. See the Statistics focus modules for details.

Taking a correlation approach.
t-test: we create group differences on the Independent Variable… and assess how the groups differ on the Dependent Variable. t = difference between groups / standard error of M.
Correlation: we measure individual differences on the predictor variable… and see if they are associated with differences on the outcome. r = Σ(Z var1 × Z var2) / df (n – 1).

Statistics summary: correlation. The Pearson Correlation (r) measures how similar the variance is between two variables ("shared variance") within a group of participants. For each participant, multiply the Z scores for the two variables; sum across all participants; divide by df: r = Σ(Z var1 × Z var2) / df (n – 1).
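A minimal sketch of that Z-score definition of r (data and variable names invented), checked against a standard library routine:

```python
import statistics
from scipy import stats

def pearson_r(x, y):
    """r = sum(Zx * Zy) / (n - 1), using sample standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    zx = [(v - mx) / sx for v in x]
    zy = [(v - my) / sy for v in y]
    return sum(a * b for a, b in zip(zx, zy)) / (len(x) - 1)

hours_studied = [2, 4, 5, 7, 8]        # hypothetical predictor
exam_score    = [55, 60, 70, 80, 85]   # hypothetical outcome

r_manual = pearson_r(hours_studied, exam_score)
r_scipy, _ = stats.pearsonr(hours_studied, exam_score)
print(round(r_manual, 3), round(r_scipy, 3))   # the two values agree
```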

Multiple independent variables Testing hypotheses about > 1 independent variable Factorial Designs: Main effects, Additive Effects, Interactions

> 1 independent variable: include a 'control' variable as a second I.V. Block the data by gender, age, race, attitudes, etc., and test whether the main Independent Variable has the same effect within both groups.
Example: What is the effect of self-reflection on stress reduction? Hypothesis: training in self-reflection helps buffer the stress of exams. 2nd question: is that effect the same in women and men? [old v. young, etc…]
Main effect: self-reflection training → less stress. Interaction: training → less stress worked for women, not men. Conclusion: including a 'control' variable helped clarify the results.
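A toy sketch of that example as a 2 × 2 factorial layout (all cell means are invented), showing how a main effect and an interaction are read from the cells:

```python
# Hypothetical mean stress scores in a 2 (training: no/yes) x 2 (gender) design
stress = {
    ("no training", "women"): 7.0, ("no training", "men"): 7.0,
    ("training",    "women"): 4.0, ("training",    "men"): 6.5,
}

# Main effect of training: compare marginal means, collapsing across gender
m_no  = (stress[("no training", "women")] + stress[("no training", "men")]) / 2
m_yes = (stress[("training", "women")]    + stress[("training", "men")])    / 2
print("main effect of training (overall drop in stress):", m_no - m_yes)

# Interaction: is the training effect the same at each level of gender?
effect_women = stress[("no training", "women")] - stress[("training", "women")]
effect_men   = stress[("no training", "men")]   - stress[("training", "men")]
print("training effect for women:", effect_women)   # 3.0
print("training effect for men:",   effect_men)     # 0.5 -> the effect depends on gender
```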

Testing more than one Independent Variable:
Does each variable by itself significantly affect the outcome? These are the separate 'main effects' of each I.V.
What is the combined effect of these variables? These are the 'additive' effects of 2+ I.V.s.
Does the effect of each I.V. depend upon the other I.V.? This is the interaction of 2 or more I.V.s.
Know the difference between a main effect, an additive effect, and an interaction.

Interaction example: Genetics, stress and depression Participants’ genotype and level of childhood trauma interact in depression. There is a general (main) effect whereby more trauma leads to greater likelihood of adult depression

Interaction example: genetics, stress and depression, continued. However… the effect of trauma interacts with genetics. So, the effect of maltreatment on adult depression depends on the level of a second variable, genetic disposition: childhood trauma has no effect in people who have no genetic vulnerability, and the effect of different levels of trauma on depression depends on the person's level of genetic vulnerability.
Understand clearly why/how this is an interaction, not a main effect or additive effect. Also understand how the interaction tells us much more than the simple main effect.

Example of a 3-way interaction. [Figure 3: mean ratings of subjective stimulation and sedation on the BAES under 0.65 g/kg alcohol and placebo in women and men.] Here is another variation on an interaction effect: alcohol (v. placebo) made men much more stimulated, and made women much more sedated.

Alternate portrayal of the 3-way mood interaction. The alcohol conditions show a classic "cross-over" effect for gender & mood (men get aroused, women get sedated); the placebo conditions do not show much effect. [Figure: mean BAES subscale scores.]

In this interaction, the effect of alcohol on emotions was opposite for men v. women: men get aroused, women get sedated. In both examples the effect of one IV on the outcome depends upon a second (or third) IV.

Multiple IVs, summary. Multiple Independent Variables / predictors:
Are critical to theory development and testing: stress or other environmental events can "switch on" genes that create psychological or other problems; genetic dispositions and environment are not separate processes.
Establish key "boundary conditions" for a theory: when and among whom does a basic psychological process operate? Alcohol makes it more difficult to inhibit behavior, but primarily among men.

Summary. Basics: science = values, not just content and methods.
Ways of knowing: authority, intuition, empiricism (most central to science), rationalism.
Internal validity, external validity, and the internal ↔ external validity tradeoff.
Threats to internal validity (from lack of a control group).

Summary. Sampling: who is the target population? How do you decide who is a member?
Probability sampling: a random element to sampling; most representative.
Non-probability sampling: less representative; best for highly targeted sub-populations.

Summary. The numbers: problems determining causality with correlations.
Number scales: ratio, interval, ordinal, categorical.
Distributions: bi-modal, skewed, normal.

Summary. The numbers: measures of central tendency and the Standard Deviation (S).
Mode: best for bi-modal data. Median: best for highly skewed data. Mean: best for normally distributed data.
Know Z = (X – M) / S, statistical significance, and the critical ratio.

Summary. The numbers: about 5% of scores fall beyond Z = ±1.98; 1.98 is the critical value at alpha = p < .05. A t beyond ±1.98 occurs less than 5% of the time by chance alone, so it is "statistically significant."
Central Limit Theorem: adjust the critical value for the greater variance stemming from smaller samples (fewer df); the t-table shows critical values for different sample dfs.

Summary. The numbers: statistical effects – main effect, additive effect, interaction.