Presentation transcript:

APPENDIX C: PROBABILITY THEORY

…you can never know too much probability theory. If you are well grounded in probability theory, you will find it easy to integrate results from theoretical and applied statistics into the analysis of your applications.
– Daniel McFadden, 2000 Nobel Prize in Economics

Organization of appendix in ISSO:
–Basic properties
  Sample space
  Expected value
–Convergence theory
  Definitions (four modes)
  Examples and counterexamples
  Dominated convergence theorem
  Convergence in distribution and central limit theorem

Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall

C-2 Probability Theory

Random variables, distribution functions, and expectations are critical
–Central tools in stochastic search, optimization, and Monte Carlo methods
Probabilistic convergence is important in building the theoretical foundation for stochastic algorithms
Most theoretical results for algorithms rely on asymptotic arguments (i.e., convergence)

C-3 Expectation

Let X ∈ ℝ^m, m ∈ {1, 2, …}, be distributed according to density function p_X(x)
Then the expected value of a function f(X) is
  E[f(X)] = ∫_{ℝ^m} f(x) p_X(x) dx,
provided that ∫_{ℝ^m} ‖f(x)‖ p_X(x) dx < ∞
Obvious analogue to above for discrete random vectors
Important special cases for expected value:
–Mean: f(X) = X
–Covariance matrix: f(X) = [X − E(X)][X − E(X)]^T
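To make the special cases concrete, here is a minimal Monte Carlo sketch (my addition, not part of the ISSO slides), assuming NumPy and an illustrative bivariate normal X: the sample average of f(X) approximates E[f(X)], with f(X) = X giving the mean and f(X) = [X − E(X)][X − E(X)]^T giving the covariance matrix.

```python
# Minimal sketch (not from ISSO): Monte Carlo approximation of E[f(X)].
# The distribution and its parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])              # true mean of X
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # true covariance matrix of X

n = 100_000
X = rng.multivariate_normal(mu, Sigma, size=n)   # n draws of X in R^2

# f(X) = X: the sample average approximates the mean E(X)
mean_hat = X.mean(axis=0)

# f(X) = [X - E(X)][X - E(X)]^T: the sample average approximates the covariance matrix
centered = X - mean_hat
cov_hat = centered.T @ centered / n

print("estimated mean:", mean_hat)
print("estimated covariance:\n", cov_hat)
```

With n = 100,000 draws, both estimates should land close to mu and Sigma, which is the law-of-large-numbers behavior the convergence slides below make precise.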

C-4 Probabilistic Convergence

Finite-sample results are usually hopeless
Asymptotic (convergence) results provide means to analyze stochastic algorithms
Four famous modes of convergence:
–almost surely (a.s.)
–in probability (pr.)
–in mean-square (m.s.)
–in distribution (dist.)
First three modes above pertain to sense in which X_k → X as k → ∞
Last mode (dist.) pertains to convergence of distribution function of X_k to distribution function of X
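As an informal illustration (my addition, not part of the ISSO slides), the sketch below uses NumPy to show what convergence in probability looks like empirically: with X_k the sample mean of k i.i.d. Exponential(1) draws, the weak law of large numbers gives X_k → 1 (pr.), so the estimated P(|X_k − 1| > ε) shrinks as k grows. The choices of ε, k, and the number of replications are illustrative assumptions.

```python
# Sketch (not from ISSO): an empirical look at convergence in probability.
# X_k = mean of k i.i.d. Exponential(1) draws, so X_k -> 1 (pr.) by the
# weak law of large numbers.  We estimate P(|X_k - 1| > eps) over many
# independent replications and watch it shrink as k grows.
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05
n_paths = 1000                                   # independent replications of X_k

for k in [10, 100, 1000, 10000]:
    draws = rng.exponential(scale=1.0, size=(n_paths, k))
    X_k = draws.mean(axis=1)                     # one realization of X_k per path
    prob = np.mean(np.abs(X_k - 1.0) > eps)      # estimate of P(|X_k - 1| > eps)
    print(f"k = {k:6d}   P(|X_k - 1| > {eps}) ~ {prob:.3f}")
```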

C-5 Implications for Four Modes of Convergence

[Diagram in original slide showing the implication relationships among the four modes: a.s. ⇒ pr., m.s. ⇒ pr., and pr. ⇒ dist.; no other implications hold in general.]
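A classical example showing why some arrows are absent from the diagram (my addition, not part of the ISSO slides): independent X_k ~ Bernoulli(1/k) converges to 0 in probability and in mean square, but not almost surely, since by the second Borel-Cantelli lemma X_k = 1 occurs infinitely often with probability 1. The NumPy sketch below, with illustrative sample sizes, shows both effects.

```python
# Sketch (not from ISSO): X_k ~ Bernoulli(1/k), independent.
# P(|X_k - 0| > eps) = 1/k -> 0 and E[X_k^2] = 1/k -> 0, so X_k -> 0 in
# probability and in mean square, yet ones typically keep recurring along
# a single path (almost-sure convergence fails by Borel-Cantelli).
import numpy as np

rng = np.random.default_rng(2)
K = 100_000

# One sample path of X_1, ..., X_K.
k_vals = np.arange(1, K + 1)
x = (rng.random(K) < 1.0 / k_vals).astype(int)

# Ones typically still appear even for large k on this single path.
late_ones = k_vals[(x == 1) & (k_vals > 1000)]
print("k > 1000 with X_k = 1 on this path:", late_ones)

# Convergence in probability at a fixed large k: across many independent
# paths, the fraction with X_k = 1 is about 1/k.
k = 50_000
n_paths = 200_000
frac = np.mean(rng.random(n_paths) < 1.0 / k)
print(f"fraction of paths with X_k = 1 at k = {k}: {frac:.2e}  (theory: {1/k:.2e})")
```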