Basic Probability and Probability Distributions


1 Basic Probability and Probability Distributions
Chapter 4 Basic Probability and Probability Distributions

2 Probability Terminology
Classical Interpretation: Notion of probability based on equal likelihood of individual possibilities (a coin toss has a 1/2 chance of Heads; a card draw has a 4/52 chance of an Ace). Origins in games of chance.
Outcome: Distinct result of the random process (N = # of outcomes)
Event: Collection of outcomes (Ne = # of outcomes in the event)
Probability of event E: P(event E) = Ne/N
Relative Frequency Interpretation: If an experiment were conducted repeatedly, the fraction of the time the event of interest would occur (based on empirical observation)
Subjective Interpretation: Personal view (possibly based on external information) of how likely a one-shot experiment is to end in the event of interest

3 Obtaining Event Probabilities
Classical Approach:
List all N possible outcomes of the experiment
List all Ne outcomes corresponding to the event of interest (E)
P(event E) = Ne/N
Relative Frequency Approach:
Define the event of interest
Conduct the experiment repeatedly (often using a computer)
Measure the fraction of the time event E occurs
Subjective Approach:
Obtain as much information on the process as possible
Consider the different outcomes and their likelihoods
When possible, monitor your skill (e.g. stocks, weather)
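The relative frequency approach lends itself to simulation. A minimal sketch (example not from the slides): estimate the classical probability that two fair dice sum to 7 by repeating the experiment many times.

```python
import random

def estimate_p_sum7(n_trials=100_000, seed=1):
    """Estimate P(two dice sum to 7) by relative frequency."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials)
               if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return hits / n_trials

# Classical answer: 6 favorable outcomes / 36 equally likely outcomes = 1/6
print(estimate_p_sum7())   # approximately 0.167
```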

4 Basic Counting Rules
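The transcript does not preserve the formulas on this slide; as a sketch of the standard counting rules (multiplication rule, permutations, combinations), with illustrative numbers of my own:

```python
import math

# Multiplication rule: a k-step process with n1, n2, ..., nk choices per step
n_outfits = 4 * 3 * 2            # e.g. 4 shirts x 3 pants x 2 shoes = 24

# Permutations: ordered arrangements of r items chosen from n
n_perm = math.perm(10, 3)        # 10*9*8 = 720

# Combinations: unordered selections of r items from n
n_comb = math.comb(52, 5)        # 2,598,960 possible five-card hands

print(n_outfits, n_perm, n_comb)
```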

5 Basic Probability and Rules
A, B ≡ Events of interest; P(A), P(B) ≡ Event probabilities
Union: Event that either A or B occurs (A ∪ B)
Mutually Exclusive: A and B cannot occur at the same time
If A, B are mutually exclusive: P(either A or B) = P(A) + P(B)
Complement of A: Event that A does not occur (Ā); P(Ā) = 1 − P(A), that is, P(A) + P(Ā) = 1
Intersection: Event that both A and B occur (A ∩ B or AB)
In general: P(A ∪ B) = P(A) + P(B) − P(AB)
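A quick numerical check of the general addition rule using a single fair die (example of my own, not from the slides): let A = {even roll} and B = {roll greater than 3}.

```python
from fractions import Fraction

outcomes = range(1, 7)
A = {y for y in outcomes if y % 2 == 0}      # {2, 4, 6}
B = {y for y in outcomes if y > 3}           # {4, 5, 6}

P = lambda E: Fraction(len(E), 6)            # classical probability of an event
# P(A or B) = P(A) + P(B) - P(A and B)
assert P(A | B) == P(A) + P(B) - P(A & B)    # 4/6 = 3/6 + 3/6 - 2/6
print(P(A | B))                              # 2/3
```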

6 Conditional Probability and Independence
Unconditional/Marginal Probability: Frequency with which an event occurs in general (given no additional information). P(A)
Conditional Probability: Probability an event (A) occurs given knowledge that another event (B) has occurred. P(A|B)
Independent Events: Events whose unconditional and conditional (given the other) probabilities are the same

7 John Snow London Cholera Death Study
2 Water Companies (let D be the event of death):
Southwark & Vauxhall (S): 3702 deaths among its customers
Lambeth (L): 407 deaths among its customers
Overall: 4109 deaths
Note that the probability of death is almost 6 times higher for S&V customers than for Lambeth customers (this was important in showing how cholera spread).

8 John Snow London Cholera Death Study
Contingency Table with joint probabilities (in body of table) and marginal probabilities (on edge of table)

9 John Snow London Cholera Death Study
Tree diagram: joint probabilities are obtained by the multiplication rule.
S&V branch (P(S&V) = .6072): P(D | S&V) = .0140 ⇒ P(S&V and D) = .0085; P(D^C | S&V) = .9860 ⇒ P(S&V and D^C) = .5987
Lambeth branch (P(L) = .3928): P(D | L) = .0024 ⇒ P(L and D) = .0009; P(D^C | L) = .9976 ⇒ P(L and D^C) = .3919
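Using only the rounded probabilities shown in the tree, the multiplication rule, the law of total probability, and Bayes' rule can be checked directly; a sketch:

```python
# Marginal company probabilities and conditional death rates from the tree
p_sv, p_l = 0.6072, 0.3928
p_d_given_sv, p_d_given_l = 0.0140, 0.0024

# Multiplication rule: joint probabilities (match .0085 and .0009 on the slide)
p_sv_and_d = p_sv * p_d_given_sv
p_l_and_d = p_l * p_d_given_l

# Law of total probability and Bayes' rule
p_d = p_sv_and_d + p_l_and_d
p_sv_given_d = p_sv_and_d / p_d
print(round(p_sv_and_d, 4), round(p_l_and_d, 4), round(p_sv_given_d, 3))
```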

10 Bayes’s Rule - Updating Probabilities
Let A1, …, Ak be a set of events that partition the sample space (mutually exclusive and exhaustive):
Each event has known P(Ai) > 0 (each event can occur)
For any two events Ai and Aj, P(Ai and Aj) = 0 (the events are disjoint)
P(A1) + … + P(Ak) = 1 (each outcome belongs to exactly one of the events)
Let C be an event such that 0 < P(C) < 1 (C can occur, but will not necessarily occur), and suppose we know the probability C will occur given each event Ai: P(C|Ai).
Then we can compute the probability of Ai given that C occurred:
P(Ai|C) = P(C|Ai)P(Ai) / [P(C|A1)P(A1) + … + P(C|Ak)P(Ak)]
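A minimal sketch of this computation as a reusable function (function and variable names are mine, not from the slides): given priors P(Ai) and likelihoods P(C|Ai), return the posteriors P(Ai|C).

```python
def bayes_posteriors(priors, likelihoods):
    """P(Ai | C) = P(C|Ai) P(Ai) / sum_j P(C|Aj) P(Aj)."""
    joints = [p * l for p, l in zip(priors, likelihoods)]
    p_c = sum(joints)                       # law of total probability
    return [j / p_c for j in joints]

# Toy example: two equally likely events, with C more likely under the first
print(bayes_posteriors([0.5, 0.5], [0.8, 0.2]))   # [0.8, 0.2]
```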

11 Northern Army at Gettysburg
Regiments: a partition of the soldiers (A1, …, A9). Casualty: event C
P(Ai) = (size of regiment) / (total soldiers) = (Col 3)/95369
P(C|Ai) = (# casualties) / (regiment size) = (Col 4)/(Col 3)
P(C|Ai) P(Ai) = P(Ai and C) = (Col 5)*(Col 6)
P(C) = sum(Col 7) = .2416
P(Ai|C) = P(Ai and C) / P(C) = (Col 7)/.2416

12 Example - OJ Simpson Trial
Given information on the blood test (T+/T−):
Sensitivity: P(T+|Guilty) = 1
Specificity: P(T−|Innocent) = .9957 ⇒ P(T+|I) = .0043
Suppose you have a prior belief of guilt: P(G) = p*
What is the "posterior" probability of guilt after seeing evidence that the blood matches, P(G|T+)?
Source: B. Forst (1996). "Evidence, Probabilities and Legal Standards for Determination of Guilt: Beyond the OJ Trial", in Representing OJ: Murder, Criminal Justice, and the Mass Culture, ed. G. Barak. Harrow and Heston, Guilderland, NY.
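With sensitivity P(T+|G) = 1 and false-positive rate P(T+|I) = .0043, Bayes' rule gives the posterior as a function of the prior p*. A sketch (variable names mine):

```python
def posterior_guilt(prior_p):
    """P(G | T+) = P(T+|G) P(G) / [P(T+|G) P(G) + P(T+|I) P(I)]."""
    sens, false_pos = 1.0, 0.0043
    num = sens * prior_p
    return num / (num + false_pos * (1 - prior_p))

for p in (0.1, 0.5, 0.9):
    print(p, round(posterior_guilt(p), 4))
# Even a weak prior of 0.1 is updated to about 0.963 by the matching blood evidence.
```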

13

14 Random Variables/Probability Distributions
Random Variable: Outcome characteristic that is not known prior to the experiment/observation
Qualitative Variables: Characteristics that are non-numeric (e.g. gender, race, religion, severity)
Quantitative Variables: Characteristics that are numeric (e.g. height, weight, distance)
Discrete: Takes on only a countable set of possible values
Continuous: Takes on values along a continuum
Probability Distribution: Numeric description of the outcomes a random variable takes on and their corresponding probabilities (discrete) or densities (continuous)

15 Discrete Random Variables
Discrete RV: Can take on a finite (or countably infinite) set of possible outcomes
Probability Distribution: List of the values a random variable can take on and their corresponding probabilities
Individual probabilities must lie between 0 and 1
Probabilities sum to 1
Notation:
Random variable: Y
Values Y can take on: y1, y2, …, yk
Probabilities: P(Y=y1) = p1, …, P(Y=yk) = pk, with p1 + … + pk = 1

16 Example: Wars Begun by Year (1482-1939)
Distribution of the number of wars started per year
Y = # of wars started in a randomly selected year
Levels: y1=0, y2=1, y3=2, y4=3, y5=4
Probability Distribution:

17 Masters Golf Tournament 1st Round Scores

18 Means and Variances of Random Variables
Mean: Long-run average value a random variable will take on (also the balance point of the probability distribution)
Expected Value is another term; however, we do not really expect that a realization of Y will necessarily be close to its mean. Notation: E(Y)
Mean and variance of a discrete random variable:
μ = E(Y) = Σ yi P(Y=yi)    σ² = V(Y) = E[(Y−μ)²] = Σ (yi−μ)² P(Y=yi)
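A short sketch computing μ = Σ yi pi and σ² = Σ (yi − μ)² pi for a generic discrete distribution (the example values are illustrative, not from the slides):

```python
def mean_var(values, probs):
    """Mean and variance of a discrete random variable."""
    mu = sum(y * p for y, p in zip(values, probs))
    var = sum((y - mu) ** 2 * p for y, p in zip(values, probs))
    return mu, var

# Illustrative distribution on {0, 1, 2, 3}
mu, var = mean_var([0, 1, 2, 3], [0.4, 0.3, 0.2, 0.1])
print(mu, var)   # 1.0, 1.0
```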

19 Rules for Means
Linear Transformations: a + bY (where a and b are constants): E(a+bY) = μa+bY = a + bμY
Sums of random variables: X + Y (where X and Y are random variables): E(X+Y) = μX+Y = μX + μY
Linear Functions of Random Variables: E(a1Y1 + … + anYn) = a1μ1 + … + anμn, where E(Yi) = μi

20 Example: Masters Golf Tournament
Mean by round (note the ordering): μ1 = 73.54, μ2 = 73.07, μ3 = 73.76, μ4 = 73.91
Mean score per hole (18 holes) for round 1: E((1/18)X1) = (1/18)μ1 = (1/18)(73.54) = 4.09
Mean score versus par (72) for round 1: E(X1 − 72) = μ1 − 72 = 73.54 − 72 = 1.54 (1.54 over par)
Mean difference (Round 1 − Round 4): E(X1 − X4) = μ1 − μ4 = 73.54 − 73.91 = −0.37
Mean total score: E(X1+X2+X3+X4) = μ1 + μ2 + μ3 + μ4 = 73.54 + 73.07 + 73.76 + 73.91 = 294.28 (6.28 over par)

21 Variance of a Random Variable
General rule: V(aX + bY) = a²σX² + b²σY² + 2abρσXσY, where ρ is the correlation between X and Y.
Special cases:
X and Y independent (the outcome of one does not alter the distribution of the other): ρ = 0, so the last term drops out
a = b = 1 and ρ = 0: V(X+Y) = σX² + σY²
a = 1, b = −1 and ρ = 0: V(X−Y) = σX² + σY²
a = b = 1 and ρ ≠ 0: V(X+Y) = σX² + σY² + 2ρσXσY
a = 1, b = −1 and ρ ≠ 0: V(X−Y) = σX² + σY² − 2ρσXσY
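A sketch of the general rule the special cases come from (the standard deviations and correlation below are illustrative numbers of my own):

```python
def var_linear_combo(a, b, sd_x, sd_y, rho):
    """V(aX + bY) for random variables with SDs sd_x, sd_y and correlation rho."""
    return a**2 * sd_x**2 + b**2 * sd_y**2 + 2 * a * b * rho * sd_x * sd_y

sd_x, sd_y = 3.0, 4.0
print(var_linear_combo(1, 1, sd_x, sd_y, 0.0))   # independent sum: 9 + 16 = 25
print(var_linear_combo(1, -1, sd_x, sd_y, 0.5))  # correlated difference: 9 + 16 - 12 = 13
```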

22 Examples - Wars & Masters Golf

23 Binomial Distribution for Sample Counts
A Binomial "Experiment":
Consists of n trials or observations
Trials/observations are independent of one another
Each trial/observation can end in one of two possible outcomes, often labelled "Success" and "Failure"
The probability of success, p, is constant across trials/observations
The random variable, Y, is the number of successes observed in the n trials/observations
Binomial Distributions: Family of distributions for Y, indexed by the success probability (p) and the number of trials/observations (n). Notation: Y ~ B(n, p)

24 Binomial Distributions and Sampling
Problem when sampling from a finite population: the sequence of Success probabilities is altered after observing earlier individuals. When the population is much larger than the sample (say at least 20 times as large), the effect is minimal and we say Y is approximately binomial.
Obtaining probabilities: P(Y = y) = C(n, y) p^y (1−p)^(n−y), y = 0, 1, …, n, where C(n, y) = n!/(y!(n−y)!)

25 Example - Diagnostic Test
A test claims to have a sensitivity of 90% (among people with the condition, the probability of testing positive is .90).
10 people who are known to have the condition are identified; Y is the number that correctly test positive.
The table is obtained in EXCEL with the function BINOM.DIST(k, n, p, FALSE) (the TRUE option gives the cumulative distribution function, P(Y ≤ k)).
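A sketch reproducing the BINOM.DIST table in Python for Y ~ B(n = 10, p = .9), using the binomial probability formula directly:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(Y = k) for Y ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.9
cdf = 0.0
for k in range(n + 1):
    pk = binom_pmf(k, n, p)
    cdf += pk                       # cumulative P(Y <= k), the TRUE option in EXCEL
    print(k, round(pk, 4), round(cdf, 4))
```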

26 Binomial Mean & Standard Deviation
Let Si = 1 if the ith individual was a success, 0 otherwise
Then P(Si=1) = p and P(Si=0) = 1−p
So E(Si) = μS = 1(p) + 0(1−p) = p
Note that Y = S1 + … + Sn and that the trials are independent
Then E(Y) = μY = nμS = np
V(Si) = E(Si²) − μS² = p − p² = p(1−p)
Then V(Y) = σY² = np(1−p), so σY = √(np(1−p))

27 Poisson Distribution for Event Counts
Distribution related to the Binomial, for counts of the number of events occurring in a fixed interval of time or space. It can be motivated by dividing the interval into many "sub-intervals" and assuming a Binomial (n = 1) distribution for the events in each.
The average number of events in a unit of time or space is μ; in general, for an interval of length t, the mean is μt.
Table 14 gives probabilities for selected μ. EXCEL function: =POISSON.DIST(y, μ, 0) returns P(y).
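A sketch of the Poisson probabilities P(Y = y) = e^(−μ) μ^y / y!, which is what POISSON.DIST returns when its last argument is 0 (the mean below is illustrative):

```python
from math import exp, factorial

def poisson_pmf(y, mu):
    """P(Y = y) for a Poisson count with mean mu."""
    return exp(-mu) * mu**y / factorial(y)

mu = 2.0   # illustrative mean number of events per unit of time
for y in range(6):
    print(y, round(poisson_pmf(y, mu), 4))
```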

28 Poisson Distribution for Event Counts

29 Continuous Random Variables
Variable can take on any value along a continuous range of numbers (interval)
Probability distribution is described by a smooth density curve
Probabilities of ranges of values for Y correspond to areas under the density curve
The curve must lie on or above the horizontal axis
The total area under the curve is 1
Special cases: Normal and Gamma distributions

30 Normal Distribution Bell-shaped, symmetric family of distributions
Classified by 2 parameters: mean (μ) and standard deviation (σ), which represent location and spread.
Random variables that are approximately normal have the following properties with respect to individual measurements:
Approximately half (50%) fall above (and below) the mean
Approximately 68% fall within 1 standard deviation of the mean
Approximately 95% fall within 2 standard deviations of the mean
Virtually all fall within 3 standard deviations of the mean
Notation when Y is normally distributed with mean μ and standard deviation σ: Y ~ N(μ, σ)

31 Two Normal Distributions

32 Normal Distribution

33 Example - Heights of U.S. Adults
Female and male adult heights (in inches) are well approximated by normal distributions: YF ~ N(63.7, 2.5), YM ~ N(69.1, 2.6)
Source: Statistical Abstract of the U.S. (1992)

34 Standard Normal (Z) Distribution
Problem: There is an unlimited number of possible normal distributions (−∞ < μ < ∞, σ > 0)
Solution: Standardize the random variable to have mean 0 and standard deviation 1: Z = (Y − μ)/σ
Probabilities of ranges of values and specific percentiles of interest can then be obtained through the standard normal (Z) distribution

35

36 Standard Normal (Z) Distribution
[Figure: standard normal curve marked at a value z, with the tabled area to the left of z and 1 − (table area) to the right]

37 [Z-table (first page): rows give the integer part & 1st decimal place of z; columns give the 2nd decimal place]

38 [Z-table (second page): rows give the integer part & 1st decimal place of z; columns give the 2nd decimal place]

39 Finding Probabilities of Specific Ranges
Step 1 - Identify the normal distribution of interest, i.e. its mean (μ) and standard deviation (σ)
Step 2 - Identify the range of values you wish to determine the probability of observing, (yL, yU), where often the upper or lower bound is ∞ or −∞
Step 3 - Transform yL and yU into Z-values: zL = (yL − μ)/σ, zU = (yU − μ)/σ
Step 4 - Obtain P(zL ≤ Z ≤ zU) from the Z-table

40 Example - Adult Female Heights
What is the probability a randomly selected female is 5'10" or taller (70 inches)?
Step 1 - Y ~ N(63.7, 2.5)
Step 2 - yL = 70, yU = ∞
Step 3 - zL = (70 − 63.7)/2.5 = 6.3/2.5 = 2.52
Step 4 - P(Y ≥ 70) = P(Z ≥ 2.52) = 1 − P(Z < 2.52) = 1 − .9941 = .0059 (≈ 1/170)
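The same four steps in Python, using the standard normal CDF from the stdlib statistics module in place of the printed Z-table:

```python
from statistics import NormalDist

mu, sigma = 63.7, 2.5                 # Step 1: Y ~ N(63.7, 2.5)
y_lower = 70                          # Step 2: want P(Y >= 70)
z = (y_lower - mu) / sigma            # Step 3: z = 2.52
p = 1 - NormalDist().cdf(z)           # Step 4: upper-tail area
print(round(z, 2), round(p, 4))       # 2.52, about 0.0059 (roughly 1 in 170)
```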

41 Finding Percentiles of a Distribution
Step 1 - Identify the normal distribution of interest, i.e. its mean (μ) and standard deviation (σ)
Step 2 - Determine the percentile of interest, 100p% (e.g. the 90th percentile is the cut-off where 90% of scores fall below and 10% fall above)
Step 3 - Find p in the body of the z-table and its corresponding z-value (zp) on the outer edge:
If 100p < 50, use the left-hand page of the table
If 100p ≥ 50, use the right-hand page of the table
Step 4 - Transform zp back to the original units: yp = μ + zp σ

42 Example - Adult Male Heights
Above what height do the tallest 5% of males lie?
Step 1 - Y ~ N(69.1, 2.6)
Step 2 - We want the 95th percentile (p = .95)
Step 3 - P(Z ≤ 1.645) = .95, so z.95 = 1.645
Step 4 - y.95 = 69.1 + (1.645)(2.6) = 73.4 inches (6'1.4")
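The percentile calculation in Python, using the inverse normal CDF in place of reading zp from the table:

```python
from statistics import NormalDist

mu, sigma = 69.1, 2.6
z_95 = NormalDist().inv_cdf(0.95)          # about 1.645
y_95 = mu + z_95 * sigma                   # transform back to inches
print(round(z_95, 3), round(y_95, 1))      # 1.645, 73.4 (about 6'1.4")
```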

43 Assessing Normality and Transformations
Obtain a histogram and see if it is mound-shaped
Obtain a normal probability plot:
Order the data from smallest to largest and rank them (1 to n)
Obtain a percentile for each: pct = (rank − 0.375)/(n + 0.25)
Obtain the z-score corresponding to each percentile
Plot the observed data versus the z-scores and see if the plot is (approximately) a straight line
Certain transformations (such as logarithms or square roots) can achieve approximate normality.
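A sketch of the normal probability plot recipe above, returning the (z-score, observation) pairs to be plotted; the percentile formula matches the slide and the sample data are illustrative:

```python
from statistics import NormalDist

def normal_plot_points(data):
    """Return (theoretical z-score, observed value) pairs for a normal probability plot."""
    ordered = sorted(data)
    n = len(ordered)
    points = []
    for rank, y in enumerate(ordered, start=1):
        pct = (rank - 0.375) / (n + 0.25)       # percentile rule from the slide
        z = NormalDist().inv_cdf(pct)           # corresponding z-score
        points.append((z, y))
    return points

# Roughly linear points suggest approximate normality.
print(normal_plot_points([4.1, 5.0, 5.2, 5.9, 6.3, 7.4]))
```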

44 Chi-Square Distribution
Indexed by its "degrees of freedom" (ν): X ~ χ²ν
If Z ~ N(0,1), then Z² ~ χ²1
Assuming independence: if X1 ~ χ²ν1 and X2 ~ χ²ν2 are independent, then X1 + X2 ~ χ²ν1+ν2
Obtaining probabilities in EXCEL: to obtain 1 − F(x) = P(X ≥ x), use the function =CHISQ.DIST.RT(x, ν)
Table 7 gives critical values for selected upper-tail probabilities

45 Chi-Square Distributions

46 Critical Values for Chi-Square Distributions (Mean = ν, Variance = 2ν)

47 Student’s t-Distribution
Indexed by its "degrees of freedom" (ν): T ~ tν
If Z ~ N(0,1) and X ~ χ²ν are independent, then T = Z / √(X/ν) ~ tν
Obtaining probabilities in EXCEL: to obtain 1 − F(t) = P(T ≥ t), use the function =T.DIST.RT(t, ν)
Table 2 gives critical values for selected upper-tail probabilities

48

49 Critical Values for Student’s t-Distributions

50 F-Distribution
Indexed by 2 "degrees of freedom" (ν1, ν2): W ~ Fν1,ν2
If X1 ~ χ²ν1 and X2 ~ χ²ν2 are independent, then W = (X1/ν1) / (X2/ν2) ~ Fν1,ν2
Obtaining probabilities in EXCEL: to obtain 1 − F(w) = P(W ≥ w), use the function =F.DIST.RT(w, ν1, ν2)
Table 8 gives critical values for selected upper-tail probabilities
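The EXCEL right-tail functions on these slides have counterparts in scipy.stats (assuming SciPy is available; the test-statistic values below are illustrative). A sketch:

```python
from scipy import stats

# Upper-tail probabilities P(X >= x), matching CHISQ.DIST.RT, T.DIST.RT, F.DIST.RT
print(stats.chi2.sf(3.84, df=1))        # about 0.05
print(stats.t.sf(2.0, df=10))           # P(T >= 2.0) with 10 df
print(stats.f.sf(3.0, dfn=3, dfd=20))   # P(W >= 3.0) with (3, 20) df
```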

51

52 Critical Values for F-distributions P(F ≤ Table Value) = 0.95

53 Sampling Distributions
Distribution of a Sample Statistic: The probability distribution of a sample statistic obtained from a random sample or a randomized experiment. What values can a sample mean (or proportion) take on, and how likely are ranges of values?
Population Distribution: The set of values of a variable across a population of individuals. Conceptually equivalent to a probability distribution, in the sense of selecting an individual at random and observing their value of the variable of interest.

54 Sampling Distribution of a Sample Mean
Obtain a sample of n independent measurements of a quantitative variable, Y1, …, Yn, from a population with mean μ and standard deviation σ.
Averages will be less variable than the individual measurements.
The sampling distribution of the average becomes more like a normal distribution as n increases (regardless of the shape of the population of individual measurements).

55 Central Limit Theorem
When random samples of size n are selected from any population with mean μ and finite standard deviation σ, the sampling distribution of the sample mean is, for large n, approximately normal with mean μ and standard deviation σ/√n.
The Z-table can be used to approximate probabilities of ranges of values for sample means, as well as percentiles of their sampling distribution.
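A small simulation illustrating the Central Limit Theorem: sample means from a skewed exponential population (a population choice of mine, not from the slides, with μ = σ = 1) have mean about μ and standard deviation about σ/√n.

```python
import random
from statistics import mean, stdev

def simulate_sample_means(n=30, reps=5000, seed=1):
    """Means of repeated samples of size n from an exponential population (mu = sigma = 1)."""
    rng = random.Random(seed)
    return [mean(rng.expovariate(1.0) for _ in range(n)) for _ in range(reps)]

means = simulate_sample_means()
# CLT: mean of the sample means is about 1, their SD is about 1/sqrt(30) = 0.183
print(round(mean(means), 3), round(stdev(means), 3))
```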

56 Sample Proportions
Counts of successes (Y) are rarely reported directly because they depend on the sample size (n). More common is to report the sample proportion of successes: p̂ = Y/n.

57 Sampling Distributions for Counts & Proportions
For samples of size n, counts (and thus proportions) can take on only n + 1 distinct possible values (0, 1, …, n for counts; 0, 1/n, …, 1 for proportions).
As the sample size n gets large, so does the number of possible values, and the sampling distribution begins to approximate a normal distribution.
Common rule of thumb for using the normal approximation: np ≥ 10 and n(1−p) ≥ 10.

58 Sampling Distribution for Y~B(n=1000,p=0.2)

59 Using Z-Table for Approximate Probabilities
To find probabilities of certain ranges of counts or proportions, use the fact that sample counts and proportions are approximately normally distributed for large sample sizes:
Define the range of interest
Obtain the mean of the sampling distribution
Obtain the standard deviation of the sampling distribution
Transform the range of interest into a range of Z-values
Obtain (approximate) probabilities from the Z-table
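These steps in Python for the earlier Y ~ B(n = 1000, p = 0.2) example, approximating P(p̂ ≥ 0.22); the 0.22 cutoff is an illustrative choice of mine:

```python
from math import sqrt
from statistics import NormalDist

n, p = 1000, 0.2
p_hat_cutoff = 0.22                       # range of interest: p_hat >= 0.22

mu = p                                    # mean of the sampling distribution of p_hat
sd = sqrt(p * (1 - p) / n)                # its standard deviation, about 0.0126
z = (p_hat_cutoff - mu) / sd              # transform to a Z-value, about 1.58
print(round(1 - NormalDist().cdf(z), 4))  # approximate P(p_hat >= 0.22), about 0.057
```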

