Preliminaries Prof. Navneet Goyal CS & IS BITS, Pilani.


1 Preliminaries Prof. Navneet Goyal CS & IS BITS, Pilani

2 Topics
- Probability Theory
- Decision Theory
- Information Theory

4 Probability Theory
- Key concept is dealing with uncertainty, due to noise and finite data sets
- Probability Densities
- Bayesian Probabilities
- Gaussian (normal) Distribution
- Curve Fitting revisited
- Bayesian Curve Fitting
- Maximum Likelihood Estimation

5 Probability Theory
- Frequentist or Classical Approach
  - Population parameters are fixed constants whose values are unknown
  - Experiments are repeated an indefinitely large number of times
  - Toss a fair coin 10 times, and it may not be unusual to observe 80% heads
  - Toss a coin 10 trillion times, and we can be fairly certain that the proportion of heads will be close to 50%
  - Long-run behavior defines probability!
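
A quick simulation of this long-run-frequency idea, a sketch only: the sample sizes and seed are chosen for illustration (10 trillion tosses is impractical to simulate, so 10 million stands in for "very many"):

```python
import numpy as np

rng = np.random.default_rng(42)

for n in (10, 1_000, 10_000_000):
    # Fair coin: 1 = heads, 0 = tails
    heads = rng.integers(0, 2, size=n).sum()
    print(f"{n:>10,} tosses: proportion of heads = {heads / n:.4f}")
# The proportion drifts toward 0.5 as the number of tosses grows.
```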

6 Probability Theory
- Frequentist or Classical Approach
  - What is the probability that a terrorist will strike an Indian city using an AK-47?
  - Difficult to conceive of the long-run behavior
  - In the frequentist approach, the parameters are fixed and the randomness lies in the data
  - Data are viewed as a random sample from a given distribution with unknown but fixed parameters

7 Probability Theory
- Bayesian Approach
  - Turn the assumptions around
  - Parameters are considered to be random variables
  - Data are considered to be known
  - Parameters come from a distribution of possible values
  - Bayesians look to the observed data to provide information on likely parameter values
  - Let θ represent the parameters of the unknown distribution
  - The Bayesian approach requires the elicitation of a prior distribution for θ, denoted p(θ)

8 Probability Theory
- Bayesian Approach
  - p(θ) can model extant expert (domain) knowledge, if any, regarding the distribution of θ
  - For example: churn-modeling experts in telcos may be aware that a customer exceeding a certain threshold number of calls to customer service may indicate a likelihood to churn
  - Combine this with prior information about the distribution of customer-service calls, including its mean and standard deviation
  - Non-informative prior: assigns equal probabilities to all values of the parameter
  - Prior probability of both churners and non-churners = 0.5 (the telco in question is doomed!!)

9 Probability Theory
- Bayesian Approach
  - The prior distribution is generally dominated by the overwhelming amount of information that is found in the data
  - p(θ|X) is the posterior distribution, where X represents the entire array of data
  - This updating of the knowledge about θ was first performed by Reverend Thomas Bayes (1702-1761)
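
A minimal sketch of prior-to-posterior updating in the churn setting, assuming a Beta prior on the churn probability θ and binomially distributed data; the prior parameters and observed counts are invented for illustration and do not come from the slides:

```python
from scipy import stats

# Prior belief about the churn probability theta: Beta(2, 8), mean 0.2
prior_a, prior_b = 2, 8

# Observed data X: 30 churners out of 200 customers (invented numbers)
churners, customers = 30, 200

# Beta prior + binomial likelihood gives a Beta posterior (conjugacy)
post_a = prior_a + churners
post_b = prior_b + (customers - churners)
posterior = stats.beta(post_a, post_b)

print(f"posterior mean of theta: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

As the slide notes, with 200 observations the data already dominate the prior: the posterior mean is close to the raw churn rate of 0.15.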

10 Probability Theory: Apples and Oranges

11 Probability Theory
- Marginal Probability
- Conditional Probability
- Joint Probability

12 Probability Theory
- Sum Rule
- Product Rule

13 The Rules of Probability
- Sum Rule
- Product Rule
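
The equations on these two slides appeared as images in the original deck; in standard notation the two rules read:

```latex
\text{sum rule:}\quad p(X) = \sum_{Y} p(X, Y)
\qquad
\text{product rule:}\quad p(X, Y) = p(Y \mid X)\, p(X)
```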

14 Bayes' Theorem
- posterior ∝ likelihood × prior
- Bayes' theorem plays a central role in ML!
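
The formula itself was an image on this slide; it follows from the product and sum rules above:

```latex
p(Y \mid X) = \frac{p(X \mid Y)\, p(Y)}{p(X)},
\qquad
p(X) = \sum_{Y} p(X \mid Y)\, p(Y)
```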

15 Joint Distribution over 2 variables

16 Probability Densities
- If the probability of a real-valued variable x falling in the interval (x, x + δx) is given by p(x) δx for δx → 0, then p(x) is called the probability density over x
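
Stated in equations (the standard properties of a density, not spelled out in the transcript):

```latex
p(x \in (a, b)) = \int_{a}^{b} p(x)\, \mathrm{d}x,
\qquad p(x) \ge 0,
\qquad \int_{-\infty}^{\infty} p(x)\, \mathrm{d}x = 1
```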

17 The Gaussian Distribution
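
The density formula on this slide was an image; the standard univariate form is:

```latex
\mathcal{N}(x \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\left\{ -\frac{(x - \mu)^2}{2\sigma^2} \right\}
```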

18 Decision Theory
- Probability theory provides us with a consistent mathematical framework for quantifying and manipulating uncertainty
- Decision theory + probability theory enable us to make optimal decisions in uncertain situations
- Input vector x, target variable t
- The joint probability distribution p(x,t) provides a complete summary of the uncertainty associated with the variables x and t
- Determination of p(x,t) from a set of training data is an example of inference, a very difficult problem
- In practical applications, we make a specific prediction for the value of t and take a specific action based on our understanding of the values t is likely to take
- This is decision theory

19

20 Decision Theory
- The decision stage is generally very simple, even trivial, once we have solved the inference problem
- Role of probabilities in decision making:
  - When we receive an X-ray image of a patient, we need to decide its class
  - We are interested in the probabilities of the two classes given the image
  - Use Bayes' theorem
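
In class notation, the posterior we need is obtained from Bayes' theorem:

```latex
p(C_k \mid x) = \frac{p(x \mid C_k)\, p(C_k)}{p(x)}
```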

21 Decision Theory: Errors

22 Decision Theory
- Optimal decision boundary?
- Equivalent to the minimum-misclassification-rate decision rule: assign each value of x to the class having the higher posterior probability p(C_k|x)

23 Decision Theory
- Minimizing expected loss
  - Simply minimizing the number of misclassifications does not suffice in all cases
  - For example: spam mail filtering, intrusion detection (IDS), disease diagnosis, etc.
  - Attach a very high cost to the type of misclassification you want to minimize or eliminate (see the sketch below)
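
A minimal sketch of the expected-loss decision rule. The class labels, loss values, and posterior probabilities below are invented for illustration (a diagnosis-style setting where missing the disease is far costlier than a false alarm); they are not taken from the slides:

```python
import numpy as np

# Classes: 0 = healthy, 1 = diseased (illustrative labels)
# loss[k, j] = cost of deciding class j when the true class is k
loss = np.array([
    [0.0,   1.0],   # true healthy: a false alarm costs 1
    [100.0, 0.0],   # true diseased: missing it costs 100
])

def decide(posterior):
    """Pick the class j that minimizes sum_k loss[k, j] * p(C_k | x)."""
    expected_loss = loss.T @ posterior   # expected loss of each decision j
    return int(np.argmin(expected_loss))

posterior = np.array([0.95, 0.05])       # p(healthy|x), p(diseased|x)
print(decide(posterior))                 # 1: the high miss cost overrides the posterior
```

With the minimum-misclassification rule of slide 22 the same posterior would give class 0; the loss matrix is what shifts the decision.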

24 Information Theory
- How much information is received when we observe a specific value for a discrete random variable x?
- The amount of information is the degree of surprise
  - A certain event carries no information
  - More information when the event is unlikely
- Entropy: a measure of disorder/unpredictability, or a measure of surprise
- Tossing a coin:
  - Fair coin: maximum entropy, as there is no way to predict the outcome of the next toss
  - Biased coin: less entropy, as uncertainty is lower and we can bet preferentially on the most frequent result
  - Two-headed coin: zero entropy, as the coin will always turn up heads
- Most collections of data in the real world lie somewhere in between

25 Information Theory
- How to measure entropy?
- Information content depends upon the probability distribution of x
- We look for a function h(x) that is a monotonic function of the probability p(x)
- If two events x and y are unrelated, then h(x,y) = h(x) + h(y)
- Two unrelated events will be statistically independent: p(x,y) = p(x)p(y)
- Hence h(x) must be the log of p(x): h(x) = -log₂ p(x); the negative sign ensures that information is positive or zero

26 Information Theory
- h(x) = -log₂ p(x); the negative sign ensures that information is positive or zero
- The choice of base for the log is arbitrary
- Information theory uses base 2, so the units of h(x) are 'bits'
- A sender wishes to transmit the value of a random variable to a receiver
- The average amount of information they transmit is obtained by taking the expectation with respect to the distribution p(x)
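
That expectation is the entropy; the equation was an image on the slide, and in standard notation it is:

```latex
H[x] = \mathbb{E}[h(x)] = -\sum_{x} p(x) \log_2 p(x)
```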

27 Entropy
- An important quantity in:
  - coding theory
  - statistical physics
  - machine learning (e.g., classification using decision trees)

28 Entropy
- Coding theory: x is discrete with 8 possible states; how many bits are needed to transmit the state of x?
- All states equally likely: H[x] = -8 × (1/8) log₂(1/8) = 3 bits; that is, we need to transmit a message of length 3 bits
- Now consider an RV x having 8 possible states (a, b, ..., h) with respective probabilities (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64)
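
A small Python check of the two entropies discussed on this and the next slide (the entropy formula itself appeared as an image in the deck):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H[x] = -sum p(x) log2 p(x), ignoring zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform = [1/8] * 8
skewed  = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]

print(entropy_bits(uniform))  # 3.0 bits
print(entropy_bits(skewed))   # 2.0 bits, smaller than the uniform case
```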

29 Entropy
- The non-uniform distribution has a smaller entropy than the uniform one!
- This has an interpretation in terms of disorder
- Use shorter codes for more probable events and longer codes for less probable events, in the hope of getting a shorter average code length

30 Entropy
- Noiseless coding theorem of Shannon: entropy is a lower bound on the number of bits needed to transmit a random variable
- When relating entropy to other topics, natural logarithms are often used; the units are then nats instead of bits

31 Linear Basis Function Models
- Polynomial basis functions: φ_j(x) = x^j
- These are global: a small change in x affects all basis functions.

32 Linear Basis Function Models (4)
- Gaussian basis functions: φ_j(x) = exp(-(x - μ_j)² / (2s²))
- These are local: a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).

33 Linear Basis Function Models (5)
- Sigmoidal basis functions: φ_j(x) = σ((x - μ_j) / s), where σ(a) = 1 / (1 + exp(-a))
- These are also local: a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
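
A short sketch implementing the three basis-function families above. The formulas on the slides were images, so these follow the standard textbook forms; the centres and scale values in the usage lines are arbitrary choices for illustration:

```python
import numpy as np

def polynomial_basis(x, degree):
    """Global basis: phi_j(x) = x**j for j = 0..degree."""
    return np.stack([x**j for j in range(degree + 1)], axis=-1)

def gaussian_basis(x, centres, s):
    """Local basis: phi_j(x) = exp(-(x - mu_j)**2 / (2 * s**2))."""
    return np.exp(-(x[:, None] - centres[None, :])**2 / (2 * s**2))

def sigmoidal_basis(x, centres, s):
    """Local basis: phi_j(x) = sigma((x - mu_j) / s), sigma(a) = 1 / (1 + exp(-a))."""
    a = (x[:, None] - centres[None, :]) / s
    return 1.0 / (1.0 + np.exp(-a))

x = np.linspace(-1, 1, 5)
centres = np.linspace(-1, 1, 3)
print(polynomial_basis(x, 3).shape)            # (5, 4)
print(gaussian_basis(x, centres, 0.2).shape)   # (5, 3)
print(sigmoidal_basis(x, centres, 0.1).shape)  # (5, 3)
```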

34 Home Work
- Read about Gaussian, sigmoidal, and Fourier basis functions
- Sequential learning and online algorithms
- Will discuss in the next class!

35 The Bias-Variance Decomposition
- The bias-variance decomposition is a formal method for analyzing the prediction error of a predictive model
- Analogy of a projectile aimed at a target:
  - Bias = average distance between the target and the location where the projectile hits the ground (depends on the angle)
  - Variance = deviation between x and the average position where the projectile hits the floor (depends on the force)
  - Noise: if the target is not stationary, then the observed distance is also affected by changes in the location of the target

36 The Bias-Variance Decomposition
- A low-degree polynomial has high bias (fits poorly) but low variance across different data sets
- A high-degree polynomial has low bias (fits well) but high variance across different data sets
- Interactive demo at: http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_bias_variance.htm
- A small simulation illustrating this is sketched below
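
A minimal simulation sketch of the point above; the target function, noise level, data-set sizes, and polynomial degrees are my own choices for illustration and are not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)          # illustrative target function

x_train = np.linspace(0, 1, 12)
x_test = np.linspace(0, 1, 100)

def predictions(degree, n_datasets=200, noise=0.3):
    """Fit a polynomial of the given degree to many independently drawn noisy data sets."""
    preds = []
    for _ in range(n_datasets):
        t = true_fn(x_train) + rng.normal(0, noise, size=x_train.shape)
        coeffs = np.polyfit(x_train, t, degree)
        preds.append(np.polyval(coeffs, x_test))
    return np.array(preds)                 # shape: (n_datasets, len(x_test))

for degree in (1, 9):
    p = predictions(degree)
    bias2 = np.mean((p.mean(axis=0) - true_fn(x_test))**2)
    var = np.mean(p.var(axis=0))
    print(f"degree {degree}: bias^2 = {bias2:.3f}, variance = {var:.3f}")
# Typically: degree 1 gives high bias^2 and low variance, degree 9 the opposite.
```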

37 The Bias-Variance Decomposition
- True height of the Chinese emperor: 200 cm, about 6'6". Poll a random American and ask, "How tall is the emperor?"
- We want to determine how wrong the answers are, on average

38 The Bias-Variance Decomposition
- Each scenario has an expected value of 180 (i.e., a bias error of 20), but increasing variance in the estimate
- Squared error = (bias error)² + variance
- As the variance increases, the error increases
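
Written out as an equation, this is the standard decomposition of expected squared error (the noise term from slide 35 is included for completeness; the equation itself was not spelled out on the slide):

```latex
\mathbb{E}\!\left[(y - \hat{f}(x))^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```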

