Probability Theory Summary


Stats 241.3 Probability Theory Summary

Probability

Axioms of Probability A probability measure P is defined on S by assigning to each event E a value P[E] with the following properties: P[E] ≥ 0, for each E. P[S] = 1. If E1, E2, … are mutually exclusive events, then P[E1 ∪ E2 ∪ …] = P[E1] + P[E2] + …

Finite uniform probability space Many examples fall into this category: a finite number of outcomes, with all outcomes equally likely. Then P[E] = n(E)/n(S). To handle problems in this case we have to be able to count: count n(E) and n(S).

Techniques for counting

Basic Rule of counting Suppose we carry out k operations in sequence. Let n1 = the number of ways the first operation can be performed, and ni = the number of ways the ith operation can be performed once the first (i − 1) operations have been completed, i = 2, 3, …, k. Then N = n1n2…nk = the number of ways the k operations can be performed in sequence.

Basic Counting Formulae Permutations: How many ways can you order n objects? n! Permutations of size k (< n): How many ways can you choose k objects from n objects in a specific order? P(n, k) = n!/(n − k)!

Combinations of size k (≤ n): A combination of size k chosen from n objects is a subset of size k where the order of selection is irrelevant. How many ways can you choose a combination of size k objects from n objects (order of selection is irrelevant)? C(n, k) = n!/(k!(n − k)!)

Important Notes In combinations ordering is irrelevant: different orderings result in the same combination. In permutations order is relevant: different orderings result in different permutations.
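
As a quick numeric check of these formulae, here is a minimal sketch using Python's standard library (math.factorial, math.perm and math.comb; the latter two exist from Python 3.8 on). The values n = 5 and k = 3 are arbitrary illustration choices:

```python
import math

n, k = 5, 3

# Orderings of all n objects: n!
print(math.factorial(n))   # 120

# Permutations of size k: P(n, k) = n! / (n - k)!
print(math.perm(n, k))     # 60

# Combinations of size k: C(n, k) = n! / (k! (n - k)!)
print(math.comb(n, k))     # 10

# Each combination of size k corresponds to k! distinct permutations
assert math.perm(n, k) == math.comb(n, k) * math.factorial(k)
```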

Rules of Probability

The additive rule P[A ∪ B] = P[A] + P[B] − P[A ∩ B], and if A ∩ B = ∅ (A and B are mutually exclusive) then P[A ∪ B] = P[A] + P[B]

The additive rule for more than two events P[A1 ∪ A2 ∪ … ∪ Ak] = Σi P[Ai] − Σi<j P[Ai ∩ Aj] + … (inclusion–exclusion), and if Ai ∩ Aj = ∅ for all i ≠ j, then P[A1 ∪ A2 ∪ … ∪ Ak] = P[A1] + P[A2] + … + P[Ak]

The Rule for complements For any event E, P[Ē] = 1 − P[E]

Conditional Probability, Independence and The Multiplicative Rule

The conditional probability of A given B is defined to be: P[A|B] = P[A ∩ B]/P[B], provided P[B] ≠ 0

The multiplicative rule of probability P[A ∩ B] = P[A] P[B|A] = P[B] P[A|B], and P[A ∩ B] = P[A] P[B] if A and B are independent. This last identity is the definition of independence.

The multiplicative rule for more than two events P[A1 ∩ A2 ∩ … ∩ Ak] = P[A1] P[A2|A1] P[A3|A1 ∩ A2] … P[Ak|A1 ∩ A2 ∩ … ∩ Ak−1]

Independence for more than 2 events

Definition: The set of k events A1, A2, …, Ak are called mutually independent if: P[Ai1 ∩ Ai2 ∩ … ∩ Aim] = P[Ai1] P[Ai2] … P[Aim] for every subset {i1, i2, …, im} of {1, 2, …, k}. E.g. for k = 3, A1, A2, A3 are mutually independent if: P[A1 ∩ A2] = P[A1] P[A2], P[A1 ∩ A3] = P[A1] P[A3], P[A2 ∩ A3] = P[A2] P[A3], and P[A1 ∩ A2 ∩ A3] = P[A1] P[A2] P[A3]

Definition: The set of k events A1, A2, …, Ak are called pairwise independent if: P[Ai ∩ Aj] = P[Ai] P[Aj] for all i and j. E.g. for k = 3, A1, A2, A3 are pairwise independent if: P[A1 ∩ A2] = P[A1] P[A2], P[A1 ∩ A3] = P[A1] P[A3], P[A2 ∩ A3] = P[A2] P[A3]. It is not necessarily true that P[A1 ∩ A2 ∩ A3] = P[A1] P[A2] P[A3]

Bayes Rule for probability P[A|B] = P[A] P[B|A] / (P[A] P[B|A] + P[Ā] P[B|Ā])

A generalization of Bayes Rule Let A1, A2, …, Ak denote a set of events such that A1 ∪ A2 ∪ … ∪ Ak = S and Ai ∩ Aj = ∅ for all i ≠ j (the events partition S). Then P[Ai|B] = P[Ai] P[B|Ai] / (P[A1] P[B|A1] + P[A2] P[B|A2] + … + P[Ak] P[B|Ak])
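
A minimal numeric sketch of this partition form of Bayes Rule; the three events and all probability values below are invented for illustration, not taken from the course:

```python
# Hypothetical illustration: three machines (A1, A2, A3) partition production,
# B = "item is defective". Priors P[Ai] and conditionals P[B|Ai] are invented.
priors = [0.5, 0.3, 0.2]      # P[A1], P[A2], P[A3]; must sum to 1
cond = [0.01, 0.02, 0.05]     # P[B|A1], P[B|A2], P[B|A3]

# Law of total probability: P[B] = sum over i of P[Ai] P[B|Ai]
p_b = sum(p * c for p, c in zip(priors, cond))

# Bayes Rule: P[Ai|B] = P[Ai] P[B|Ai] / P[B]
posteriors = [p * c / p_b for p, c in zip(priors, cond)]
print(posteriors)             # posterior probabilities, sum to 1
```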

Random Variables: an important concept in probability

A random variable, X, is a numerical quantity whose value is determined by a random experiment.

Definition – The probability function, p(x), of a random variable, X. For any random variable, X, and any real number, x, we define p(x) = P[X = x], where {X = x} = the set of all outcomes (an event) with X = x. For continuous random variables p(x) = 0 for all values of x.

Definition – The cumulative distribution function, F(x), of a random variable, X. For any random variable, X, and any real number, x, we define F(x) = P[X ≤ x], where {X ≤ x} = the set of all outcomes (an event) with X ≤ x.

Discrete Random Variables For a discrete random variable X the probability distribution is described by the probability function p(x), which has the following properties: 0 ≤ p(x) ≤ 1; Σ p(x) = 1 (summing over all possible values x); P[a ≤ X ≤ b] = Σ p(x) (summing over a ≤ x ≤ b).

[Graph: probability function p(x) of a discrete random variable, with a and b marking an interval on the x-axis]

Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following properties: f(x) ≥ 0; ∫ f(x) dx = 1 (integrating over −∞ < x < ∞); P[a ≤ X ≤ b] = ∫ f(x) dx (integrating from a to b).

[Graph: probability density function, f(x), of a continuous random variable]

The distribution function F(x) This is defined for any random variable, X: F(x) = P[X ≤ x]. Properties: F(−∞) = 0 and F(∞) = 1. F(x) is non-decreasing (i.e. if x1 < x2 then F(x1) ≤ F(x2)). F(b) − F(a) = P[a < X ≤ b].

Here p(x) = P[X = x] = F(x) − F(x−), where F(x−) denotes the limit of F from the left at x. If p(x) = 0 for all x (i.e. X is continuous) then F(x) is continuous.

For Discrete Random Variables F(x) is a non-decreasing step function, with a jump of height p(x) at each possible value x. [Graph: step function F(x) with jumps p(x)]

For Continuous Random Variables F(x) is a non-decreasing continuous function, with slope f(x) at x. To find the probability density function, f(x), one first finds F(x); then f(x) = F′(x). [Graph: continuous F(x) with slope f(x) at x]

Some Important Discrete distributions

The Bernoulli distribution

Suppose that we have an experiment that has two outcomes: Success (S) and Failure (F). These terms are used in reliability testing. Suppose that p is the probability of success (S) and q = 1 − p is the probability of failure (F). This experiment is sometimes called a Bernoulli Trial. Let X = 1 if the outcome is S and X = 0 if the outcome is F. Then P[X = 1] = p and P[X = 0] = q = 1 − p.

The probability distribution with probability function p(x) = p^x q^(1−x), x = 0, 1 (i.e. p(1) = p, p(0) = q = 1 − p) is called the Bernoulli distribution.

The Binomial distribution

We observe a Bernoulli trial (S,F) n times. Let X denote the number of successes in the n trials. Then X has a binomial distribution, i.e. p(x) = P[X = x] = C(n, x) p^x q^(n−x), x = 0, 1, 2, …, n, where p = the probability of success (S), and q = 1 − p = the probability of failure (F)
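
A minimal sketch of this probability function using only the standard library; n = 10 and p = 0.5 are arbitrary illustration values:

```python
import math

def binom_pmf(x, n, p):
    # p(x) = C(n, x) * p^x * (1 - p)^(n - x), x = 0, 1, ..., n
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.5
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
print(sum(pmf))            # ~1.0: probabilities sum to one
print(binom_pmf(5, n, p))  # P[X = 5] for 10 tosses of a fair coin, ~0.2461
```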

The Poisson distribution Suppose events are occurring randomly and uniformly in time. Let X be the number of events occurring in a fixed period of time. Then X will have a Poisson distribution with parameter λ: p(x) = λ^x e^(−λ)/x!, x = 0, 1, 2, …

The Geometric distribution Suppose a Bernoulli trial (S,F) is repeated until a success occurs. X = the trial on which the first success (S) occurs. The probability function of X is: p(x) = P[X = x] = (1 − p)^(x−1) p = q^(x−1) p, x = 1, 2, 3, …

The Negative Binomial distribution Suppose a Bernoulli trial (S,F) is repeated until k successes occur. Let X = the trial on which the kth success (S) occurs. The probability function of X is: p(x) = C(x − 1, k − 1) p^k q^(x−k), x = k, k + 1, k + 2, …

The Hypergeometric distribution Suppose we have a population containing N objects. Suppose the elements of the population are partitioned into two groups. Let a = the number of elements in group A and let b = the number of elements in the other group (group B). Note N = a + b. Now suppose that n elements are selected from the population at random. Let X denote the number of selected elements from group A. The probability distribution of X is p(x) = C(a, x) C(b, n − x) / C(N, n)
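
For concreteness, sketches of the remaining discrete probability functions in the same style (standard library only; all parameter values below are arbitrary):

```python
import math

def poisson_pmf(x, lam):
    # p(x) = lam^x * e^(-lam) / x!
    return lam**x * math.exp(-lam) / math.factorial(x)

def geometric_pmf(x, p):
    # p(x) = (1 - p)^(x - 1) * p, x = 1, 2, 3, ...
    return (1 - p)**(x - 1) * p

def neg_binomial_pmf(x, k, p):
    # p(x) = C(x - 1, k - 1) * p^k * (1 - p)^(x - k), x = k, k+1, ...
    return math.comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

def hypergeometric_pmf(x, a, b, n):
    # p(x) = C(a, x) * C(b, n - x) / C(a + b, n)
    return math.comb(a, x) * math.comb(b, n - x) / math.comb(a + b, n)

print(poisson_pmf(2, lam=3.0))                # P[X = 2] when events occur at rate 3
print(geometric_pmf(4, p=0.25))               # first success on trial 4
print(neg_binomial_pmf(7, k=3, p=0.4))        # 3rd success on trial 7
print(hypergeometric_pmf(2, a=5, b=15, n=6))  # 2 of 6 draws come from group A
```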

Continuous Distributions

Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following properties: f(x) ≥ 0; ∫ f(x) dx = 1 (integrating over −∞ < x < ∞); P[a ≤ X ≤ b] = ∫ f(x) dx (integrating from a to b).

[Graph: probability density function, f(x), of a continuous random variable]

Continuous Distributions The Uniform distribution from a to b: f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.

The Normal distribution (mean μ, standard deviation σ): f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)), −∞ < x < ∞

The Exponential distribution: f(x) = λ e^(−λx) for x ≥ 0, and f(x) = 0 otherwise.
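
A minimal sketch of these three densities, with a crude Riemann-sum check that each integrates to 1 (grid limits and step size are arbitrary choices):

```python
import math

def uniform_pdf(x, a, b):
    # f(x) = 1 / (b - a) on [a, b], 0 elsewhere
    return 1 / (b - a) if a <= x <= b else 0.0

def normal_pdf(x, mu, sigma):
    # f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def exponential_pdf(x, lam):
    # f(x) = lam * exp(-lam * x) for x >= 0, 0 elsewhere
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

# Each density should integrate to ~1 (crude Riemann sum over [-10, 10])
dx = 0.001
grid = [i * dx for i in range(-10_000, 10_000)]
print(sum(uniform_pdf(x, 0, 2) * dx for x in grid))      # ~1.0
print(sum(normal_pdf(x, 0, 1) * dx for x in grid))       # ~1.0
print(sum(exponential_pdf(x, 1.5) * dx for x in grid))   # ~1.0
```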

The Weibull distribution A model for the lifetime of objects that do age.

The Weibull distribution with parameters α and β.

[Graph: Weibull densities f(x) for (α = 0.9, β = 2), (α = 0.7, β = 2), (α = 0.5, β = 2)]

The Gamma distribution An important family of distributions

The Gamma distribution Let the continuous random variable X have density function: f(x) = (λ^α/Γ(α)) x^(α−1) e^(−λx) for x ≥ 0, and f(x) = 0 otherwise. Then X is said to have a Gamma distribution with parameters α and λ.

[Graph: gamma densities f(x) for (α = 2, λ = 0.9), (α = 2, λ = 0.6), (α = 3, λ = 0.6)]

Comments The set of gamma distributions is a family of distributions (parameterized by α and λ). Contained within this family are other distributions. The Exponential distribution – in the case α = 1, the gamma distribution becomes the exponential distribution with parameter λ. The exponential distribution arises if we are measuring the lifetime, X, of an object that does not age. It is also used as a distribution for waiting times between events occurring uniformly in time. The Chi-square distribution – in the case α = ν/2 and λ = ½, the gamma distribution becomes the chi-square (χ²) distribution with ν degrees of freedom. Later we will see that sums of squares of independent standard normal variates have a chi-square distribution, with degrees of freedom = the number of independent terms in the sum of squares.
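
A small sketch illustrating these special cases numerically, assuming the gamma density as defined above (parameter values are arbitrary):

```python
import math

def gamma_pdf(x, alpha, lam):
    # f(x) = lam^alpha * x^(alpha - 1) * exp(-lam * x) / Gamma(alpha), x >= 0
    if x <= 0:
        return 0.0
    return lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

# alpha = 1 reduces the gamma density to the exponential density lam * e^(-lam x)
lam, x = 1.5, 2.0
print(gamma_pdf(x, alpha=1, lam=lam))   # ~0.0747
print(lam * math.exp(-lam * x))         # identical

# alpha = nu/2, lam = 1/2 gives the chi-square density with nu degrees of freedom
nu = 4
print(gamma_pdf(3.0, alpha=nu / 2, lam=0.5))   # chi-square (nu = 4) density at x = 3
```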

Expectation

Let X denote a discrete random variable with probability function p(x) (probability density function f(x) if X is continuous); then the expected value of X, E(X), is defined to be: E(X) = Σ x p(x) (summing over all x), and if X is continuous with probability density function f(x), E(X) = ∫ x f(x) dx (integrating over −∞ < x < ∞).

Expectation of functions Let X denote a discrete random variable with probability function p(x); then the expected value of g(X), E[g(X)], is defined to be: E[g(X)] = Σ g(x) p(x), and if X is continuous with probability density function f(x), E[g(X)] = ∫ g(x) f(x) dx.
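
A minimal numeric sketch of both definitions; the binomial and exponential examples below are arbitrary illustration choices:

```python
import math

# Discrete case: E[g(X)] = sum of g(x) * p(x) over all x.
# Illustration: X ~ binomial(n = 4, p = 0.5); compute E[X] and E[X^2].
pmf = {x: math.comb(4, x) * 0.5**4 for x in range(5)}
e_x = sum(x * p for x, p in pmf.items())
e_x2 = sum(x**2 * p for x, p in pmf.items())
print(e_x, e_x2)    # 2.0 and 5.0, so Var(X) = 5 - 2^2 = 1 = npq

# Continuous case: E[X] = integral of x * f(x) dx, by a crude Riemann sum.
# Illustration: X ~ exponential(lam = 2); E[X] should be 1/lam = 0.5.
lam, dx = 2.0, 0.0001
print(sum(i * dx * lam * math.exp(-lam * i * dx) * dx
          for i in range(1, 200_000)))    # ~0.5
```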

Moments of a Random Variable

The kth moment of X: μk = E(X^k). The first moment of X, μ = μ1 = E(X), is the center of gravity of the distribution of X. The higher moments give different information regarding the distribution of X.

The kth central moment of X: μk⁰ = E[(X − μ)^k]. The second central moment, μ2⁰ = E[(X − μ)²] = σ², is the variance of X.

Moment generating functions

Definition Let X denote a random variable. Then the moment generating function of X, mX(t), is defined by: mX(t) = E(e^(tX))

Properties mX(0) = 1. The kth derivative at 0 gives the kth moment: mX^(k)(0) = μk = E(X^k). Hence mX(t) = 1 + μ1 t + μ2 t²/2! + μ3 t³/3! + …

Let X be a random variable with moment generating function mX(t), and let Y = bX + a. Then mY(t) = m(bX+a)(t) = E(e^((bX+a)t)) = e^(at) E(e^(X(bt))) = e^(at) mX(bt). Let X and Y be two independent random variables with moment generating functions mX(t) and mY(t). Then mX+Y(t) = E(e^((X+Y)t)) = E(e^(Xt) e^(Yt)) = E(e^(Xt)) E(e^(Yt)) = mX(t) mY(t)

Let X and Y be two random variables with moment generating functions mX(t) and mY(t) and distribution functions FX(x) and FY(y) respectively. If mX(t) = mY(t), then FX(x) = FY(x). This ensures that the distribution of a random variable can be identified by its moment generating function.
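
A small numeric sketch of the product rule for MGFs of independent sums, using the binomial distribution from earlier (the values of t, p, n1 and n2 are arbitrary):

```python
import math

# MGF of a binomial(n, p) random variable is (q + p e^t)^n.
# For independent X ~ binomial(n1, p) and Y ~ binomial(n2, p),
# X + Y ~ binomial(n1 + n2, p), so m_{X+Y}(t) should equal m_X(t) m_Y(t).

def binom_mgf(t, n, p):
    # m(t) = sum over x of e^(t x) C(n, x) p^x q^(n - x) = (q + p e^t)^n
    return sum(math.exp(t * x) * math.comb(n, x) * p**x * (1 - p)**(n - x)
               for x in range(n + 1))

t, p, n1, n2 = 0.3, 0.4, 5, 7
lhs = binom_mgf(t, n1 + n2, p)                    # MGF of the sum
rhs = binom_mgf(t, n1, p) * binom_mgf(t, n2, p)   # product of the MGFs
print(lhs, rhs)                                    # equal up to rounding
```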

M. G. F.’s - Continuous distributions Uniform from a to b: (e^(bt) − e^(at))/((b − a)t). Normal(μ, σ): e^(μt + σ²t²/2). Exponential(λ): λ/(λ − t), t < λ. Gamma(α, λ): (λ/(λ − t))^α, t < λ. Chi-square (ν d.f.): (1 − 2t)^(−ν/2), t < ½.

M. G. F.’s - Discrete distributions Bernoulli: q + pe^t. Binomial(n, p): (q + pe^t)^n. Geometric: pe^t/(1 − qe^t). Negative binomial(k, p): (pe^t/(1 − qe^t))^k. Poisson(λ): e^(λ(e^t − 1)).

Note: The distribution of a random variable X can be described by: the probability function p(x) (if X is discrete) or the probability density function f(x) (if X is continuous); the cumulative distribution function F(x); or the moment generating function mX(t).

Jointly distributed Random variables Multivariate distributions

Discrete Random Variables

The joint probability function: p(x,y) = P[X = x, Y = y]

Continuous Random Variables

Definition: Two random variables are said to have joint probability density function f(x,y) if: f(x,y) ≥ 0; ∫∫ f(x,y) dx dy = 1 (integrating over the whole plane); P[(X,Y) ∈ A] = ∫∫ f(x,y) dx dy (integrating over the region A).

Marginal and conditional distributions

Marginal Distributions (Discrete case): Let X and Y denote two random variables with joint probability function p(x,y); then the marginal probability function of X is pX(x) = Σ p(x,y) (summing over y), and the marginal probability function of Y is pY(y) = Σ p(x,y) (summing over x).

Marginal Distributions (Continuous case): Let X and Y denote two random variables with joint probability density function f(x,y); then the marginal density of X is fX(x) = ∫ f(x,y) dy, and the marginal density of Y is fY(y) = ∫ f(x,y) dx.

Conditional Distributions (Discrete Case): Let X and Y denote two random variables with joint probability function p(x,y) and marginal probability functions pX(x), pY(y); then the conditional probability function of Y given X = x is pY|X(y|x) = p(x,y)/pX(x), and the conditional probability function of X given Y = y is pX|Y(x|y) = p(x,y)/pY(y).

Conditional Distributions (Continuous Case): Let X and Y denote two random variables with joint probability density function f(x,y) and marginal densities fX(x), fY(y); then the conditional density of Y given X = x is fY|X(y|x) = f(x,y)/fX(x), and the conditional density of X given Y = y is fX|Y(x|y) = f(x,y)/fY(y).
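
A minimal sketch computing marginal and conditional probability functions from a discrete joint table; the joint probabilities below are invented for illustration:

```python
# Marginals and conditionals from a discrete joint probability table.
joint = {  # p(x, y) = P[X = x, Y = y]; values are made up and sum to 1
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.25,
    (2, 0): 0.05, (2, 1): 0.10,
}

xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

# Marginals: p_X(x) = sum over y of p(x, y);  p_Y(y) = sum over x of p(x, y)
p_x = {x: sum(joint[(x, y)] for y in ys) for x in xs}
p_y = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Conditional of Y given X = x0: p(y | x0) = p(x0, y) / p_X(x0)
x0 = 1
p_y_given_x0 = {y: joint[(x0, y)] / p_x[x0] for y in ys}
print(p_x, p_y, p_y_given_x0)   # each conditional sums to 1
```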

The bivariate Normal distribution

Let f(x1, x2) = (1/(2π σ1 σ2 √(1 − ρ²))) e^(−Q/2), where Q = [((x1 − μ1)/σ1)² − 2ρ((x1 − μ1)/σ1)((x2 − μ2)/σ2) + ((x2 − μ2)/σ2)²]/(1 − ρ²). This distribution is called the bivariate Normal distribution. The parameters are μ1, μ2, σ1, σ2 and ρ.

Surface Plots of the bivariate Normal distribution

Marginal distributions The marginal distribution of x1 is Normal with mean μ1 and standard deviation σ1. The marginal distribution of x2 is Normal with mean μ2 and standard deviation σ2.

Conditional distributions The conditional distribution of x1 given x2 is Normal with: mean μ1 + ρ(σ1/σ2)(x2 − μ2) and standard deviation σ1√(1 − ρ²). The conditional distribution of x2 given x1 is Normal with: mean μ2 + ρ(σ2/σ1)(x1 − μ1) and standard deviation σ2√(1 − ρ²).

Independence

Definition: Two random variables X and Y are defined to be independent if p(x,y) = pX(x) pY(y) if X and Y are discrete, and f(x,y) = fX(x) fY(y) if X and Y are continuous.

Multivariate distributions (k ≥ 2)

Definition Let X1, X2, …, Xn denote n discrete random variables; then p(x1, x2, …, xn) is the joint probability function of X1, X2, …, Xn if p(x1, x2, …, xn) ≥ 0 and Σ … Σ p(x1, x2, …, xn) = 1 (summing over all values).

Definition Let X1, X2, …, Xk denote k continuous random variables; then f(x1, x2, …, xk) is the joint density function of X1, X2, …, Xk if f(x1, x2, …, xk) ≥ 0 and ∫ … ∫ f(x1, x2, …, xk) dx1 … dxk = 1.

The Multinomial distribution Suppose that we observe an experiment that has k possible outcomes {O1, O2, …, Ok } independently n times. Let p1, p2, …, pk denote probabilities of O1, O2, …, Ok respectively. Let Xi denote the number of times that outcome Oi occurs in the n repetitions of the experiment.

The joint probability function of X1, X2, …, Xk, p(x1, x2, …, xk) = (n!/(x1! x2! … xk!)) p1^x1 p2^x2 … pk^xk for x1 + x2 + … + xk = n, is called the Multinomial distribution.
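
A minimal sketch of this probability function (the die example is an arbitrary illustration):

```python
import math

def multinomial_pmf(xs, ps):
    # p(x1, ..., xk) = n! / (x1! ... xk!) * p1^x1 * ... * pk^xk, sum(xs) = n
    n = sum(xs)
    coef = math.factorial(n)
    for x in xs:
        coef //= math.factorial(x)   # exact at each step
    prob = float(coef)
    for x, p in zip(xs, ps):
        prob *= p**x
    return prob

# Illustration: a fair six-sided die rolled 12 times, each face exactly twice
print(multinomial_pmf([2] * 6, [1 / 6] * 6))   # ~0.0034
```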

The Multivariate Normal distribution Recall the univariate normal density f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)) and the bivariate normal density given above.

The k-variate Normal distribution f(x1, x2, …, xk) = (2π)^(−k/2) |Σ|^(−1/2) e^(−½ (x − μ)′ Σ⁻¹ (x − μ)), where x = (x1, …, xk)′ is the vector of variables, μ is the vector of means, and Σ is the k × k covariance matrix.

Marginal distributions

Definition Let X1, X2, …, Xq, Xq+1, …, Xk denote k discrete random variables with joint probability function p(x1, x2, …, xq, xq+1, …, xk); then the marginal joint probability function of X1, X2, …, Xq is p1…q(x1, …, xq) = Σ … Σ p(x1, …, xk) (summing over xq+1, …, xk).

Definition Let X1, X2, …, Xq, Xq+1, …, Xk denote k continuous random variables with joint probability density function f(x1, x2, …, xq, xq+1, …, xk); then the marginal joint density function of X1, X2, …, Xq is f1…q(x1, …, xq) = ∫ … ∫ f(x1, …, xk) dxq+1 … dxk.

Conditional distributions

Definition Let X1, X2, …, Xq, Xq+1, …, Xk denote k discrete random variables with joint probability function p(x1, x2, …, xq, xq+1, …, xk); then the conditional joint probability function of X1, X2, …, Xq given Xq+1 = xq+1, …, Xk = xk is p1…q|q+1…k(x1, …, xq | xq+1, …, xk) = p(x1, …, xk) / pq+1…k(xq+1, …, xk).

Definition Let X1, X2, …, Xq, Xq+1, …, Xk denote k continuous random variables with joint probability density function f(x1, x2, …, xq, xq+1, …, xk); then the conditional joint density function of X1, X2, …, Xq given Xq+1 = xq+1, …, Xk = xk is f1…q|q+1…k(x1, …, xq | xq+1, …, xk) = f(x1, …, xk) / fq+1…k(xq+1, …, xk).

Definition – Independence of sets of vectors Let X1, X2, …, Xq, Xq+1, …, Xk denote k continuous random variables with joint probability density function f(x1, x2, …, xq, xq+1, …, xk); then the variables X1, X2, …, Xq are independent of Xq+1, …, Xk if f(x1, …, xk) = f1…q(x1, …, xq) fq+1…k(xq+1, …, xk). A similar definition applies for discrete random variables.

Definition – Mutual Independence Let X1, X2, …, Xk denote k continuous random variables with joint probability density function f(x1, x2, …, xk); then the variables X1, X2, …, Xk are called mutually independent if f(x1, x2, …, xk) = f1(x1) f2(x2) … fk(xk). A similar definition applies for discrete random variables.

Expectation for multivariate distributions

Definition Let X1, X2, …, Xn denote n jointly distributed random variables with joint density function f(x1, x2, …, xn); then E[g(X1, …, Xn)] = ∫ … ∫ g(x1, …, xn) f(x1, …, xn) dx1 … dxn.

Some Rules for Expectation

Note: you can calculate E[Xi] either from the joint distribution of X1, …, Xn or from the marginal distribution of Xi. The Linearity property: E[a1X1 + a2X2 + … + anXn + b] = a1E[X1] + a2E[X2] + … + anE[Xn] + b.

(The Multiplicative property) Suppose X1, …, Xq are independent of Xq+1, …, Xk; then E[g(X1, …, Xq) h(Xq+1, …, Xk)] = E[g(X1, …, Xq)] E[h(Xq+1, …, Xk)]. In the simple case when k = 2: if X and Y are independent, E[XY] = E[X] E[Y].

Some Rules for Variance Var(X) = σ² = E[(X − μ)²] = E(X²) − μ²

Tchebychev’s inequality P[|X − μ| ≥ kσ] ≤ 1/k², equivalently P[|X − μ| < kσ] ≥ 1 − 1/k². Example: with k = 2, at least 1 − 1/4 = 75% of any distribution lies within two standard deviations of the mean.

Note: If X and Y are independent, then Cov(X, Y) = E[(X − μX)(Y − μY)] = 0, and hence Var(X + Y) = Var(X) + Var(Y).

The correlation coefficient ρXY = Cov(X, Y)/(σX σY). Properties: 1. −1 ≤ ρXY ≤ 1. 2. ρXY = ±1 if there exist a and b such that P[Y = bX + a] = 1, where ρXY = +1 if b > 0 and ρXY = −1 if b < 0.

Some other properties of variance Var(aX + b) = a²Var(X). Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y). Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y).

Variance: Multiplicative Rule for independent random variables Suppose that X and Y are independent random variables; then: Var(XY) = σX²σY² + μY²σX² + μX²σY².

Mean and Variance of averages Let X1, …, Xn be n mutually independent random variables each having mean μ and standard deviation σ (variance σ²). Let x̄ = (X1 + X2 + … + Xn)/n. Then E[x̄] = μ and Var(x̄) = σ²/n (standard deviation σ/√n).

The Law of Large Numbers Let X1, …, Xn be n mutually independent random variables each having mean μ. Let x̄ = (X1 + X2 + … + Xn)/n. Then for any δ > 0 (no matter how small), P[|x̄ − μ| < δ] → 1 as n → ∞.
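
A simulation sketch of this behaviour (the exponential distribution, seed and sample sizes are arbitrary choices):

```python
import random
import statistics

# Sample means of n independent exponential(lam = 2) variables
# (mean mu = 1/lam = 0.5) concentrate around mu as n grows.
random.seed(1)

for n in (10, 100, 10_000):
    xbars = [statistics.mean(random.expovariate(2.0) for _ in range(n))
             for _ in range(200)]
    # Spread of the sample mean shrinks like sigma / sqrt(n)
    print(n, round(statistics.mean(xbars), 4), round(statistics.stdev(xbars), 4))
```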

Conditional Expectation:

Definition Let X1, X2, …, Xq, Xq+1, …, Xk denote k continuous random variables with joint probability density function f(x1, x2, …, xq, xq+1, …, xk); then the conditional joint density function of X1, X2, …, Xq given Xq+1 = xq+1, …, Xk = xk is f1…q|q+1…k(x1, …, xq | xq+1, …, xk) = f(x1, …, xk) / fq+1…k(xq+1, …, xk).

Definition Let U = h(X1, X2, …, Xq, Xq+1, …, Xk); then the Conditional Expectation of U given Xq+1 = xq+1, …, Xk = xk is E[U | xq+1, …, xk] = ∫ … ∫ h(x1, …, xk) f1…q|q+1…k(x1, …, xq | xq+1, …, xk) dx1 … dxq. Note this will be a function of xq+1, …, xk.

A very useful rule Let (x1, x2, …, xq, y1, y2, …, ym) = (x, y) denote q + m random variables. Then E[g(x, y)] = Ey[ E[g(x, y) | y] ] (the double-expectation rule).

Functions of Random Variables

Methods for determining the distribution of functions of Random Variables 1. Distribution function method 2. Moment generating function method 3. Transformation method

Distribution function method Let X, Y, Z, … have joint density f(x, y, z, …). Let W = h(X, Y, Z, …). First step: find the distribution function of W: G(w) = P[W ≤ w] = P[h(X, Y, Z, …) ≤ w]. Second step: find the density function of W: g(w) = G′(w).
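
A minimal sketch of the two steps for a concrete case, W = X^2 with X uniform on (0, 1). The simulation check, seed and sample size are arbitrary illustration choices:

```python
import random
import math

# Step 1: G(w) = P[W <= w] = P[X <= sqrt(w)] = sqrt(w), for 0 <= w <= 1.
# Step 2: g(w) = G'(w) = 1 / (2 sqrt(w)).
random.seed(1)
samples = [random.random()**2 for _ in range(100_000)]

w = 0.25
empirical_G = sum(s <= w for s in samples) / len(samples)
print(empirical_G, math.sqrt(w))   # both ~0.5: simulation matches G(w)
```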

Use of moment generating functions Using the moment generating functions of X, Y, Z, …, determine the moment generating function of W = h(X, Y, Z, …). Identify the distribution of W from its moment generating function. This procedure works well for sums, linear combinations, averages etc.

Let x1, x2, … denote a sequence of independent random variables with moment generating functions m1(t), m2(t), …. Sums: Let S = x1 + x2 + … + xn; then mS(t) = m1(t) m2(t) … mn(t). Linear Combinations: Let L = a1x1 + a2x2 + … + anxn; then mL(t) = m1(a1t) m2(a2t) … mn(ant).

Arithmetic Means Let x1, x2, … denote a sequence of independent random variables coming from a distribution with moment generating function m(t). Then the arithmetic mean x̄ = (x1 + … + xn)/n has moment generating function mx̄(t) = [m(t/n)]^n.

The Transformation Method Theorem Let X denote a random variable with probability density function f(x) and U = h(X). Assume that h(x) is either strictly increasing (or decreasing); then the probability density of U is: g(u) = f(h⁻¹(u)) |d h⁻¹(u)/du|.
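
A minimal sketch of the theorem for a concrete case, U = √X with X exponential; all parameter values are arbitrary:

```python
import math
import random

# X ~ exponential(lam), U = h(X) = sqrt(X). h is strictly increasing on
# x >= 0, h^-1(u) = u^2, |d h^-1 / du| = 2u, so
# g(u) = f(u^2) * 2u = lam * exp(-lam * u^2) * 2u for u >= 0.
lam = 1.0

def g(u):
    return 2 * u * lam * math.exp(-lam * u * u) if u >= 0 else 0.0

# Check: integrate g over [0, 1] vs the empirical frequency of U <= 1
du = 0.0001
analytic = sum(g(i * du) * du for i in range(10_000))
random.seed(1)
empirical = sum(math.sqrt(random.expovariate(lam)) <= 1
                for _ in range(100_000)) / 100_000
print(analytic, empirical)    # both ~ 1 - e^(-1) ~ 0.632
```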

The Transformation Method (many variables) Theorem Let x1, x2, …, xn denote random variables with joint probability density function f(x1, x2, …, xn). Let u1 = h1(x1, x2, …, xn), u2 = h2(x1, x2, …, xn), …, un = hn(x1, x2, …, xn) define an invertible transformation from the x's to the u's.

Then the joint probability density function of u1, u2, …, un is given by: g(u1, u2, …, un) = f(x1, x2, …, xn) |J|, where the xi are expressed in terms of the ui and J = det[∂xi/∂uj] is the Jacobian of the transformation.

Some important results Distribution of functions of random variables

The method used to derive these results will be indicated by: DF - Distribution Function Method; MGF - Moment Generating Function Method; TF - Transformation Method.

Student’s t distribution Let Z and U be two independent random variables with: Z having a Standard Normal distribution and U having a χ² distribution with ν degrees of freedom. Then the distribution of t = Z/√(U/ν) is Student’s t distribution with ν degrees of freedom. (DF)

The Chi-square distribution Let Z1, Z2, …, Zν be ν independent random variables each having a Standard Normal distribution; then U = Z1² + Z2² + … + Zν² has a χ² distribution with ν degrees of freedom. (for ν = 1: DF; for ν > 1: MGF)

Distribution of the sample mean Let x1, x2, …, xn denote a sample from the normal distribution with mean μ and variance σ². Then x̄ = (x1 + … + xn)/n has a Normal distribution with mean μ and variance σ²/n. (MGF)

The Central Limit theorem If x1, x2, …, xn is a sample from a distribution with mean μ and standard deviation σ, then if n is large, x̄ has (approximately) a normal distribution with mean μ and variance σ²/n. (MGF)
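
A simulation sketch of the theorem (the exponential parent distribution, n, and the number of replications are arbitrary choices):

```python
import random
import statistics

# Sample means of n = 50 draws from a skewed exponential distribution
# should look approximately normal with mean mu and sd sigma / sqrt(n).
random.seed(1)
n, reps, lam = 50, 20_000, 1.0
means = [statistics.mean(random.expovariate(lam) for _ in range(n))
         for _ in range(reps)]

# Theory: mu = 1/lam = 1, sigma / sqrt(n) = 1 / sqrt(50) ~ 0.1414
print(statistics.mean(means))     # ~1.0
print(statistics.stdev(means))    # ~0.1414
# About 95% of sample means should fall within 1.96 sigma/sqrt(n) of mu
within = sum(abs(m - 1.0) <= 1.96 / n**0.5 for m in means) / reps
print(within)                      # ~0.95
```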

Distribution of sums of Gamma R. V.’s Let X1, X2, …, Xn denote n independent random variables each having a gamma distribution with parameters (λ, αi), i = 1, 2, …, n. Then W = X1 + X2 + … + Xn has a gamma distribution with parameters (λ, α1 + α2 + … + αn). (MGF) Distribution of a multiple of a Gamma R. V. Suppose that X is a random variable having a gamma distribution with parameters (λ, α). Then W = aX has a gamma distribution with parameters (λ/a, α). (MGF)

Distribution of sums of Binomial R. V.’s Let X1, X2, … , Xk denote k independent random variables each having a binomial distribution with parameters (p,ni), i = 1, 2, …, k. Then W = X1 + X2 + … + Xk has a binomial distribution with parameters (p, n1 + n2 +… + nk). MGF Distribution of sums of Negative Binomial R. V.’s Let X1, X2, … , Xn denote n independent random variables each having a negative binomial distribution with parameters (p,ki), i = 1, 2, …, n. Then W = X1 + X2 + … + Xn has a negative binomial distribution with parameters (p, k1 + k2 +… + kn). MGF

Beyond Stats 241: Courses that can be taken after Stats 241

Statistics

What is Statistics? It is the major mathematical tool of scientific inference – methods for drawing conclusions from data, where the data are to some extent corrupted by some component of random variation (random noise).

In both Statistics and Probability theory we are concerned with studying random phenomena

In probability theory The model is known and we are interested in predicting the outcomes and observations of the phenomena. [Diagram: model → outcomes and observations]

In statistics The model is unknown; the outcomes and observations of the phenomena have been observed. We are interested in determining the model from the observations. [Diagram: outcomes and observations → model]

Example - Probability A coin is tossed n = 100 times. We are interested in the observation, X, the number of times the coin is a head. Assuming the coin is balanced (i.e. p = the probability of a head = ½), X has a binomial distribution: P[X = x] = C(100, x)(½)^100, x = 0, 1, …, 100.

Example - Statistics We are interested in the success rate, p, of a new surgical procedure. The procedure is performed n = 100 times. X, the number of successful times the procedure is performed is 82. The success rate p is unknown.

If the success rate p were known, then P[X = x] = C(100, x) p^x (1 − p)^(100−x). This equation allows us to predict the value of the observation, X.

In the case when the success rate p is unknown, the above equation is still true. We will want to use the value of the observation, X = 82, to make a decision regarding the value of p (the success rate).
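
One way to make that decision: evaluate the equation above at the observed X = 82 over a grid of p values and pick the p that makes the observation most probable (a maximum-likelihood sketch; the slide only poses the problem, this particular method is an illustration):

```python
import math

def likelihood(p):
    # P[X = 82] = C(100, 82) * p^82 * (1 - p)^18, viewed as a function of p
    return math.comb(100, 82) * p**82 * (1 - p)**18

grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=likelihood)
print(best)    # 0.82: the observed proportion 82/100
```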

Introductory Statistics Courses Non-calculus based: Stats 244.3 and Stats 245.3. Calculus based: Stats 242.3.

Stats 244.3 Statistical concepts and techniques including graphing of distributions, measures of location and variability, measures of association, regression, probability, confidence intervals, hypothesis testing. Students should consult with their department before enrolling in this course to determine the status of this course in their program. Prerequisite(s): A course in a social science or Mathematics A30.

Stats 245.3 An introduction to basic statistical methods including frequency distributions, elementary probability, confidence intervals and tests of significance, analysis of variance, regression and correlation, contingency tables, goodness of fit. Prerequisite(s): MATH 100, 101, 102, 110 or STAT 103.

Stats 242.3 Sampling theory, estimation, confidence intervals, testing hypotheses, goodness of fit, analysis of variance, regression and correlation. Prerequisite(s): MATH 110, 116 and STAT 241.

Stats 244 and 245 do not require a calculus prerequisite and are "recipe" courses. Stats 242 does require calculus and probability (Stats 241) as prerequisites. It is a more theoretical class: you learn techniques for developing statistical procedures and for thoroughly investigating the properties of these procedures.

Statistics Courses beyond Stats 242.3

STAT 341.3 Probability and Stochastic Processes 1/2(3L-1P) Prerequisite(s): STAT 241. Random variables and their distributions; independence; moments and moment generating functions; conditional probability; Markov chains; stationary time-series.

STAT 342.3 Mathematical Statistics 1(3L-1P) Prerequisite(s): MATH 225 or 276; STAT 241 and 242. Probability spaces; conditional probability and independence; discrete and continuous random variables; standard probability models; expectations; moment generating functions; sums and functions of random variables; sampling distributions; asymptotic distributions. Deals with basic probability concepts at a moderately rigorous level. Note: Students with credit for STAT 340 may not take this course for credit.

STAT 344.3 Applied Regression Analysis 1/2(3L-1P) Prerequisite(s): STAT 242 or 245 or 246 or a comparable course in statistics. Applied regression analysis involving the extensive use of computer software. Includes: linear regression; multiple regression; stepwise methods; residual analysis; robustness considerations; multicollinearity; biased procedures; non-linear regression. Note: Students with credit for ECON 404 may not take this course for credit. Students with credit for STAT 344 will receive only half credit for ECON 404.

STAT 345.3 Design and Analysis of Experiments 1/2(3L-1P) Prerequisite(s): STAT 242 or 245 or 246 or a comparable course in statistics. An introduction to the principles of experimental design and analysis of variance. Includes: randomization, blocking, factorial experiments, confounding, random effects, analysis of covariance. Emphasis will be on fundamental principles and data analysis techniques rather than on mathematical theory.

STAT 346.3 Multivariate Analysis 1/2(3L-1P) Prerequisite(s): MATH 266, STAT 241, and 344 or 345. The multivariate normal distribution, multivariate analysis of variance, discriminant analysis, classification procedures, multiple covariance analysis, factor analysis, computer applications.

STAT 347.3 Non Parametric Methods 1/2(3L-1P) Prerequisite(s): STAT 242 or 245 or 246 or a comparable course in statistics. An introduction to the ideas and techniques of non-parametric analysis. Includes: one, two and K samples problems, goodness of fit tests, randomness tests, and correlation and regression.

STAT 348.3 Sampling Techniques 1/2(3L-1P) Prerequisite(s): STAT 242 or 245 or 246 or a comparable course in statistics. Theory and applications of sampling from finite populations. Includes: simple random sampling, stratified random sampling, cluster sampling, systematic sampling, probability proportionate to size sampling, and the difference, ratio and regression methods of estimation.

STAT 349.3 Time Series Analysis 1/2(3L-1P) Prerequisite(s): STAT 241, and 344 or 345. An introduction to statistical time series analysis. Includes: trend analysis, seasonal variation, stationary and non-stationary time series models, serial correlation, forecasting and regression analysis of time series data.

STAT 442.3 Statistical Inference 2(3L-1P) Prerequisite(s): STAT 342. Parametric estimation, maximum likelihood estimators, unbiased estimators, UMVUE, confidence intervals and regions, tests of hypotheses, Neyman Pearson Lemma, generalized likelihood ratio tests, chi-square tests, Bayes estimators.

STAT 443.3 Linear Statistical Models 2(3L-1P) Prerequisite(s): MATH 266, STAT 342, and 344 or 345. A rigorous examination of the general linear model using vector space theory. Includes: generalized inverses; orthogonal projections; quadratic forms; Gauss-Markov theorem and its generalizations; BLUE estimators; Non-full rank models; estimability considerations.