
1 7 - 1 Chapter 7 Mathematical Foundations

2 7 - 2 Notions of Probability Theory
– Probability theory deals with predicting how likely it is that something will happen.
– The process by which an observation is made is called an experiment or a trial.
– The collection of basic outcomes (or sample points) for our experiment is called the sample space Ω (Omega).
– An event is a subset of the sample space.
– Probabilities are numbers between 0 and 1, where 0 indicates impossibility and 1 certainty.
– A probability function/distribution distributes a probability mass of 1 throughout the sample space.

3 7 - 3 Example
– A fair coin is tossed 3 times (an experiment). What is the chance of 2 heads?
– Sample space: Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
– Uniform distribution (probability function): P(basic outcome) = 1/8
– The event of interest, a subset of Ω: getting exactly 2 heads, A = {HHT, HTH, THH}
– P(A) = 3/8
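A minimal sketch (Python, not part of the original slides) that reproduces this example by enumerating the sample space and counting outcomes:

```python
# Enumerate the sample space of three fair coin tosses and compute
# P(exactly two heads) by counting equally likely outcomes.
from itertools import product

omega = list(product("HT", repeat=3))        # sample space: 8 basic outcomes
A = [w for w in omega if w.count("H") == 2]  # event of interest: exactly two heads

print(A)                    # [('H','H','T'), ('H','T','H'), ('T','H','H')]
print(len(A) / len(omega))  # 0.375 == 3/8
```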

4 7 - 4 Conditional Probability
– Conditional probabilities measure the probability of events given some knowledge.
– Prior probabilities measure the probabilities of events before we consider our additional knowledge.
– Posterior probabilities are probabilities that result from using our additional knowledge.
– Example (three coin tosses): Event B: the 1st toss is H; Event A: exactly 2 Hs among the 1st, 2nd and 3rd tosses.
– P(B) = 4/8 = 1/2, P(A ∩ B) = P({HHT, HTH}) = 2/8 = 1/4, so P(A|B) = P(A ∩ B) / P(B) = 1/2.
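A minimal sketch continuing the same coin-toss example, computing P(A|B) = P(A ∩ B)/P(B) by counting (illustrative code, not from the slides):

```python
# Conditional probability by counting: P(A|B) = P(A ∩ B) / P(B).
from itertools import product

omega = list(product("HT", repeat=3))
A = {w for w in omega if w.count("H") == 2}  # exactly two heads
B = {w for w in omega if w[0] == "H"}        # first toss is heads

p_B = len(B) / len(omega)
p_A_and_B = len(A & B) / len(omega)
print(p_A_and_B / p_B)   # 0.5
```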

5 7 - 5 The multiplication rule: P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A)
– The chain rule (used in Markov models, …):
  P(A1 ∩ A2 ∩ … ∩ An) = P(A1) P(A2|A1) P(A3|A1 ∩ A2) … P(An|A1 ∩ … ∩ An−1)

6 7 - 6 Independence
– The chain rule relates intersection with conditionalization (important to NLP).
– Independence and conditional independence of events are two very important notions in statistics:
  – independence: P(A ∩ B) = P(A)P(B)
  – conditional independence: P(A ∩ B|C) = P(A|C)P(B|C)

7 7 - 7 Bayes' Theorem
– Bayes' Theorem lets us swap the order of dependence between events:
  P(B|A) = P(A|B)P(B) / P(A)
– This is important when the former quantity, P(B|A), is difficult to determine.
– P(A) is a normalizing constant.

8 7 - 8 An Application
– Pick the best conclusion c given some evidence e:
  – (1) evaluate the probability P(c|e) <--- unknown
  – (2) select the c with the largest P(c|e), where P(c) and P(e|c) are known: P(c|e) = P(e|c)P(c) / P(e)
– Example:
  – P(c): the relative probability of a disease
  – P(e|c): how often a symptom is associated with that disease

9 7 - 9 Bayes' Theorem (Venn diagram of two events A and B)

10 7 - 10 Bayes' Theorem
– If the sets B1, …, Bn partition A (Bi ∩ Bj = ∅ for i ≠ j, A = ∪i Bi) and P(A) > 0, then
  P(Bj|A) = P(A|Bj)P(Bj) / P(A) = P(A|Bj)P(Bj) / Σi P(A|Bi)P(Bi)
– (used in the noisy channel model)

11 7 - 11 An Example
– A parasitic gap occurs once in 100,000 sentences. A complicated pattern matcher attempts to identify sentences with parasitic gaps. The answer is positive with probability 0.95 when a sentence has a parasitic gap, and positive with probability 0.005 when it has no parasitic gap. When the test says that a sentence contains a parasitic gap, what is the probability that this is true?
– P(G) = 0.00001, P(T|G) = 0.95, P(T|¬G) = 0.005
– P(G|T) = P(T|G)P(G) / [P(T|G)P(G) + P(T|¬G)P(¬G)] = 0.95 × 0.00001 / (0.95 × 0.00001 + 0.005 × 0.99999) ≈ 0.002
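A minimal sketch of the same calculation, plugging the slide's numbers into Bayes' theorem (variable names are mine):

```python
# Bayes' theorem for the parasitic-gap example.
p_g = 0.00001            # P(G): sentence contains a parasitic gap
p_t_given_g = 0.95       # P(T|G): matcher fires given a gap
p_t_given_not_g = 0.005  # P(T|~G): matcher fires given no gap

p_t = p_t_given_g * p_g + p_t_given_not_g * (1 - p_g)  # total probability of a positive answer
p_g_given_t = p_t_given_g * p_g / p_t
print(round(p_g_given_t, 4))  # ~0.0019: a positive answer is still usually a false alarm
```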

12 7 - 12 Random Variables
– A random variable is a function X: Ω (sample space) → R^n.
  – It lets us talk about probabilities in terms of numeric values related to the event space.
– A discrete random variable is a function X: Ω → S, where S is a countable subset of R.
– If X: Ω → {0, 1}, then X is called a Bernoulli trial.
– The probability mass function (pmf) for a random variable X gives the probability that the random variable takes different numeric values:
  pmf: p(x) = p(X = x) = P(A_x), where A_x = {ω ∈ Ω : X(ω) = x}, and Σ_x p(x) = 1.

13 7 - 13 Events: toss two dice and sum their faces.
– Ω = {(1,1), (1,2), …, (1,6), (2,1), (2,2), …, (6,1), (6,2), …, (6,6)}, S = {2, 3, 4, …, 12}, X: Ω → S
– pmf: p(3) = p(X = 3) = P(A_3) = P({(1,2), (2,1)}) = 2/36, where A_3 = {ω : X(ω) = 3} = {(1,2), (2,1)}
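A minimal sketch (illustrative Python) that builds the pmf of the two-dice sum by counting outcomes:

```python
# pmf of X = sum of two dice faces, from the 36 equally likely outcomes.
from itertools import product
from collections import Counter

omega = list(product(range(1, 7), repeat=2))
counts = Counter(a + b for a, b in omega)
pmf = {s: c / len(omega) for s, c in sorted(counts.items())}

print(pmf[3])              # 2/36 ≈ 0.0556
print(sum(pmf.values()))   # ~1.0: the probability mass sums to one
```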

14 7 - 14 Expectation
– The expectation is the mean (μ, mu) or average of a random variable: E(X) = Σ_x x p(x)
– Example: roll one die and let Y be the value on its face. Then E(Y) = Σ_{y=1..6} y × 1/6 = 21/6 = 3.5.
– E(aY + b) = aE(Y) + b and E(X + Y) = E(X) + E(Y)

15 7 - 15 Variance
– The variance (σ², sigma squared) of a random variable is a measure of whether the values of the random variable tend to be consistent over trials or to vary a lot:
  Var(X) = E((X − E(X))²) = E(X²) − E²(X)
– The standard deviation (σ) is the square root of the variance.

16 7 - 16 Y: toss one die and take the value on its face; X: toss two dice and sum their faces, i.e., X = Y1 + Y2, a random variable that is the sum of the numbers on two dice.
– E(X) = E(Y1) + E(Y2) = 3.5 + 3.5 = 7, and Var(X) = E(X²) − E²(X) = 35/6 ≈ 5.83.
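A minimal sketch checking these numbers directly from the pmf of the two-dice sum (illustrative code):

```python
# Expectation and variance of the two-dice sum via E(X) and E(X^2) - E(X)^2.
from itertools import product
from collections import Counter

omega = list(product(range(1, 7), repeat=2))
pmf = {s: c / len(omega) for s, c in Counter(a + b for a, b in omega).items()}

e_x = sum(x * p for x, p in pmf.items())
e_x2 = sum(x * x * p for x, p in pmf.items())
print(e_x, e_x2 - e_x ** 2)   # ≈ 7.0 and ≈ 5.833 (= 35/6)
```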

17 7 - 17 Joint and Conditional Distributions
– More than one random variable can be defined over a sample space. In this case, we talk about a joint or multivariate probability distribution.
– The joint probability mass function for two discrete random variables X and Y is p(x, y) = P(X = x, Y = y).
– The marginal probability mass functions total up the probability masses for the values of each variable separately:
  p_X(x) = Σ_y p(x, y),  p_Y(y) = Σ_x p(x, y)
– If X and Y are independent, then p(x, y) = p_X(x) p_Y(y).

18 7 - 18 Joint and Conditional Distributions
– Similar intersection (multiplication) rules hold for joint distributions as for events: p(x, y) = p(x|y) p_Y(y).
– Chain rule in terms of random variables:
  p(w, x, y, z) = p(w) p(x|w) p(y|w, x) p(z|w, x, y)

19 7 - 19 Estimating Probability Functions
– What is the probability that the sentence "The cow chewed its cud" will be uttered? Unknown: P must be estimated from a sample of data.
– An important measure for estimating P is the relative frequency of the outcome, i.e., the proportion of times a certain outcome occurs.
– Assuming that certain aspects of language can be modeled by one of the well-known distributions is called using a parametric approach.
– If no such assumption can be made, we must use a non-parametric or distribution-free approach.

20 7 - 20 parametric approach
– Select an explicit probabilistic model.
– Specify a few parameters to determine a particular probability distribution.
– The amount of training data required to make good probability estimates is not great and can be calculated.

21 7 - 21 Standard Distributions
– In practice, one commonly finds the same basic form of a probability mass function, but with different constants employed.
– Families of pmfs are called distributions, and the constants that define the different possible pmfs in one family are called parameters.
– Discrete distributions: the binomial distribution, the multinomial distribution, the Poisson distribution.
– Continuous distributions: the normal distribution, the standard normal distribution.

22 7 - 22 Binomial distribution
– A series of trials with only two outcomes, each trial independent of all the others.
– The number r of successes out of n trials, given that the probability of success in any trial is p:
  b(r; n, p) = C(n, r) p^r (1 − p)^(n − r), where n and p are the parameters and r is the variable
– expectation: np, variance: np(1 − p)
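A minimal sketch of the binomial pmf b(r; n, p), with the mean and variance checked against np and np(1 − p); the parameter values are illustrative:

```python
# Binomial pmf and its moments.
from math import comb

def binom_pmf(r, n, p):
    """Probability of r successes in n independent trials with success prob p."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

n, p = 10, 0.3
pmf = [binom_pmf(r, n, p) for r in range(n + 1)]
mean = sum(r * q for r, q in enumerate(pmf))
var = sum(r * r * q for r, q in enumerate(pmf)) - mean ** 2
print(round(mean, 6), round(var, 6))   # 3.0 2.1  (i.e., np and np(1-p))
```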

24 7 - 24 The normal distribution
– Two parameters, the mean μ and the standard deviation σ; the curve is given by
  n(x; μ, σ) = (1 / (σ √(2π))) e^(−(x − μ)² / (2σ²))
– standard normal distribution: μ = 0, σ = 1
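A minimal sketch of the density written out from the formula above; the evaluation points are illustrative:

```python
# Normal density n(x; mu, sigma).
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and std deviation sigma."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

for x in (-1.0, 0.0, 1.0):
    print(x, round(normal_pdf(x), 4))   # peak ≈ 0.3989 at x = 0 for the standard normal
```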

26 7 - 26 Bayesian Statistics I: Bayesian Updating
– frequentist statistics vs. Bayesian statistics
  – Toss a coin 10 times and get 8 heads → 8/10 (maximum likelihood estimate)
  – But 8 heads out of 10 just happens sometimes given a small sample.
– Assume that the data are coming in sequentially and are independent.
– Given an a priori probability distribution, we can update our beliefs when a new datum comes in by calculating the Maximum A Posteriori (MAP) distribution.
– The MAP probability becomes the new prior and the process repeats on each new datum.

27 7 - 27 Frequentist Statistics
– μ_m: the model that asserts P(head) = m
– s: a particular sequence of observations yielding i heads and j tails
– Likelihood: P(s|μ_m) = m^i (1 − m)^j
– Find the MLE (maximum likelihood estimate) by differentiating the above polynomial: m = i / (i + j)
– 8 heads and 2 tails: 8/(8+2) = 0.8

28 7 - 28 Bayesian Statistics
– A priori probability distribution: our belief in the fairness of the coin (that it is a regular, fair one, so m is most likely near 1/2).
– A particular sequence of observations: i heads and j tails.
– New belief in the fairness of the coin (when i = 8, j = 2)?
  P(μ_m|s) = P(s|μ_m) P(μ_m) / P(s) ∝ m^i (1 − m)^j P(μ_m)
– The maximum of this posterior becomes the new prior for the next observation.
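A minimal sketch of Bayesian updating on a grid of candidate models P(head) = m. The concrete prior 6m(1 − m), which peaks at m = 1/2, is an assumption of this sketch; after 8 heads and 2 tails the posterior peaks between the prior's peak and the MLE:

```python
# Grid-based Bayesian updating for the coin model P(head) = m.
grid = [i / 100 for i in range(1, 100)]        # candidate values of m
prior = [6 * m * (1 - m) for m in grid]        # assumed prior favouring fair coins

i_heads, j_tails = 8, 2
unnorm = [p * m ** i_heads * (1 - m) ** j_tails for m, p in zip(grid, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]        # posterior over the grid given the data

map_m = grid[posterior.index(max(posterior))]
print(map_m)   # 0.75: between the prior peak (0.5) and the MLE (0.8)
```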

29 7 - 29 Bayesian Statistics II: Bayesian Decision Theory
– Bayesian statistics can be used to evaluate which model or family of models better explains some data.
– We define two different models of the event and calculate the likelihood ratio between these two models.

30 7 - 30 Entropy
– The entropy is the average uncertainty of a single random variable. Let p(x) = P(X = x), where x ∈ X:
  H(p) = H(X) = −Σ_{x ∈ X} p(x) log2 p(x)
– In other words, entropy measures the amount of information in a random variable. It is normally measured in bits.
– Example: toss two coins and count the number of heads: p(0) = 1/4, p(1) = 1/2, p(2) = 1/4, so H(X) = 1.5 bits.
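A minimal sketch of the entropy formula applied to the two-coin example above (and, for comparison, to a uniform eight-way choice):

```python
# H(p) = -sum p(x) log2 p(x), in bits.
from math import log2

def entropy(probs):
    """Entropy in bits of a discrete distribution given as a list of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.5, 0.25]))   # 1.5 bits: number of heads in two coin tosses
print(entropy([1 / 8] * 8))         # 3.0 bits: a uniform choice among 8 outcomes
```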

31 7 - 31 Example
– Roll an 8-sided die: each face has probability 1/8, so H(X) = 3 bits.
– Entropy (another view): the average length of the message needed to transmit an outcome of that variable.
  Face:  1    2    3    4    5    6    7    8
  Code:  001  010  011  100  101  110  111  000
– Optimal code length to send a message of probability p(i): −log2 p(i) bits.

32 7 - 32 Example
– Problem: send a friend a message that is a number from 0 to 3. How long a message must you send (in terms of number of bits)?
– Example: watch a house with two occupants.
  Case  Situation       Probability 1  Code  Probability 2  Code
  0     no occupants    0.25           00    0.5            0
  1     1st occupant    0.25           01    0.125          110
  2     2nd occupant    0.25           10    0.125          111
  3     both occupants  0.25           11    0.25           10

33 7 - 33 Variable-length encoding
– Code tree requirements:
  – (1) all messages are handled;
  – (2) the receiver can tell when one message ends and the next starts (no codeword is a prefix of another).
– Fewer bits for more frequent messages, more bits for less frequent messages:
  0 - no occupants, 10 - both occupants, 110 - first occupant, 111 - second occupant
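A minimal sketch checking that the code above is optimal for the second probability distribution: its expected length equals the entropy (illustrative code):

```python
# Expected code length vs. entropy for the occupancy example.
from math import log2

probs = {"no occupants": 0.5, "both occupants": 0.25,
         "1st occupant": 0.125, "2nd occupant": 0.125}
code = {"no occupants": "0", "both occupants": "10",
        "1st occupant": "110", "2nd occupant": "111"}

avg_len = sum(probs[m] * len(code[m]) for m in probs)
entropy = -sum(p * log2(p) for p in probs.values())
print(avg_len, entropy)   # 1.75 1.75: the code meets the entropy lower bound
```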

34 7 - 34 W: random variable for a message; V(W): the set of possible messages; P: probability distribution over messages.
– Lower bound on the average number of bits needed to encode such a message: the entropy of the random variable,
  H(W) = −Σ_{w ∈ V(W)} P(w) log2 P(w)

35 7 - 35 (1) Entropy of a message: a lower bound for the average number of bits needed to transmit that message.
– (2) Encoding method: use ⌈−log2 P(w)⌉ bits for message w.
– Entropy (another view): a measure of the uncertainty about what a message says.
  – Fewer bits for a more certain message, more bits for a less certain message.

37 7 - 37 Simplified Polynesian
  Letter:       p    t    k    a    i    u
  Probability:  1/8  1/4  1/8  1/4  1/8  1/8
– per-letter entropy: H(P) = −Σ_x p(x) log2 p(x) = 2.5 bits
  Letter:  p    t    k    a    i    u
  Code:    100  00   101  01   110  111
– Fewer bits are used to send more frequent letters.
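A minimal sketch computing the per-letter entropy and the average code length from the table above:

```python
# Per-letter entropy of Simplified Polynesian and average code length.
from math import log2

p = {"p": 1/8, "t": 1/4, "k": 1/8, "a": 1/4, "i": 1/8, "u": 1/8}
code = {"p": "100", "t": "00", "k": "101", "a": "01", "i": "110", "u": "111"}

H = -sum(q * log2(q) for q in p.values())
avg_len = sum(p[c] * len(code[c]) for c in p)
print(H, avg_len)   # 2.5 2.5 bits per letter
```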

38 7 - 38 Joint Entropy and Conditional Entropy
– The joint entropy of a pair of discrete random variables X, Y ~ p(x,y) is the amount of information needed on average to specify both their values:
  H(X, Y) = −Σ_x Σ_y p(x, y) log2 p(x, y)
– The conditional entropy of a discrete random variable Y given another X, for X, Y ~ p(x,y), expresses how much extra information you still need to supply on average to communicate Y given that the other party knows X:
  H(Y|X) = Σ_x p(x) H(Y|X = x) = −Σ_x Σ_y p(x, y) log2 p(y|x)

39 7 - 39 Chain rule for entropy
– H(X, Y) = H(X) + H(Y|X), and more generally H(X1, …, Xn) = H(X1) + H(X2|X1) + … + H(Xn|X1, …, Xn−1)
– Proof (two-variable case): expand log p(x, y) = log p(x) + log p(y|x) inside the expectation and split the sum.

40 7 - 40 Simplified Polynesian revisited
– Distinction between model and reality: Simplified Polynesian has syllable structure.
– All words consist of sequences of CV (consonant-vowel) syllables.
– A better model is in terms of two random variables C and V.

41 7 - 41 (table: the per-syllable joint distribution P(C, V) of consonants and vowels, with marginals P(C, ·) and P(·, V))

44 7 - 44 Short Summary
– Better understanding (the syllable model) means much less uncertainty: 2.44 bits per syllable < 5 bits per syllable under the per-letter model.
– The incorrect (approximate) model's cross entropy is larger than that of the correct model.

45 7 - 45 Entropy rate: per-letter/word entropy
– The amount of information contained in a message depends on the length of the message, so we normalize by length:
  H_rate = (1/n) H(X_1n) = −(1/n) Σ_{x_1n} p(x_1n) log2 p(x_1n)
– The entropy of a human language L is the limit of this rate as the message length grows:
  H_rate(L) = lim_{n→∞} (1/n) H(X1, X2, …, Xn)

46 7 - 46 Mutual Information
– By the chain rule for entropy, we have H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y).
– Therefore, H(X) − H(X|Y) = H(Y) − H(Y|X).
– This difference is called the mutual information I(X; Y) between X and Y. It is the reduction in uncertainty of one random variable due to knowing about another, or, in other words, the amount of information one random variable contains about another.
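A minimal sketch computing I(X; Y) = H(X) + H(Y) − H(X, Y) from a small joint distribution; the table of numbers is illustrative, not from the slides:

```python
# Mutual information from a joint pmf.
from math import log2

joint = {  # p(x, y)
    ("rain", "umbrella"): 0.30, ("rain", "no_umbrella"): 0.10,
    ("dry", "umbrella"): 0.10, ("dry", "no_umbrella"): 0.50,
}

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p    # marginal p(x)
    py[y] = py.get(y, 0.0) + p    # marginal p(y)

mi = H(px.values()) + H(py.values()) - H(joint.values())
print(round(mi, 4))   # ≈ 0.2565 bits: knowing X reduces uncertainty about Y
```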

47 7 - 47 (diagram: the relationship between H(X), H(Y), H(X|Y), H(Y|X), H(X,Y) and I(X;Y))

48 7 - 48 I(X; Y) is 0 only when the two variables are independent; for two dependent variables, mutual information grows not only with the degree of dependence, but also with the entropy of the variables.
– Conditional mutual information: I(X; Y|Z) = H(X|Z) − H(X|Y, Z)
– Chain rule for mutual information: I(X_1n; Y) = Σ_{i=1..n} I(X_i; Y | X_1, …, X_{i−1})

49 7 - 49 Pointwise Mutual Information
– Pointwise mutual information between two particular outcomes x and y:
  I(x, y) = log2 [ p(x, y) / (p(x) p(y)) ]
– Applications: clustering words, word sense disambiguation.

50 7 - 50 Clustering by Next Word (Brown, et al., 1992)
1. Each word w_i was characterized by the word that immediately followed it: c(w_i) is the vector of counts, over all w_j, of how often … w_i w_j … occurs in the corpus.
2. Define the distance measure on such vectors via mutual information I(x, y), the amount of information one outcome gives us about the other:
   I(x, y) = (−log P(x)) − (−log P(x|y)) = log [ P(x|y) / P(x) ]
   i.e., (uncertainty about x) minus (uncertainty about x once y is known).

51 7 - 51 Example: how much information does the word "pancake" give us about the following word "syrup"?

52 7 - 52 Physical meaning of MI (1)
– w_i and w_j have no particular relation to each other: P(w_j | w_i) = P(w_j), so I(x; y) ≈ 0.
– x and y give each other no information.

53 7 - 53 Physical meaning of MI (2)
– w_i and w_j are perfectly coordinated: P(w_j | w_i) / P(w_j) is a very large number, so I(x; y) >> 0.
– Once y is known, it provides a lot of information about x.

54 7 - 54 Physical meaning of MI (3)
– w_i and w_j are negatively correlated: P(w_j | w_i) < P(w_j), so I(x; y) is a negative number (I(x; y) << 0).
– The two words avoid each other ("two kings never meet").

55 7 - 55 The Noisy Channel Model
– Assuming that you want to communicate messages over a channel of restricted capacity, optimize (in terms of throughput and accuracy) the communication in the presence of noise in the channel.
– A channel's capacity can be reached by designing an input code that maximizes the mutual information between the input and output over all possible input distributions: C = max_{p(X)} I(X; Y).
– Diagram: W (message from a finite alphabet) → Encoder → X (input to channel) → noisy channel p(y|x) → Y (output from channel) → Decoder → Ŵ (attempt to reconstruct the message based on the output).

56 7 - 56 A binary symmetric channel: a 1 or 0 in the input gets flipped on transmission with probability p (0→0 and 1→1 with probability 1−p, 0→1 and 1→0 with probability p).
– I(X; Y) = H(Y) − H(Y|X) = H(Y) − H(p), where H(p) = −p log2 p − (1−p) log2 (1−p).
– The channel capacity is 1 bit only if the entropy H(p) is 0, i.e., if p = 0 the channel reliably transmits a 0 as 0 and a 1 as 1, and if p = 1 it always flips bits.
– The channel capacity is 0 when both 0s and 1s are transmitted with equal probability as 0s and 1s (i.e., p = 1/2): a completely noisy binary channel.
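A minimal sketch of the capacity C = 1 − H(p) that follows from I(X; Y) = H(Y) − H(p) with a uniform input; the values of p are illustrative:

```python
# Capacity of the binary symmetric channel as a function of the flip probability p.
from math import log2

def binary_entropy(p):
    """H(p) in bits for a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.0, 0.1, 0.5, 1.0):
    print(p, 1 - binary_entropy(p))   # 1.0, ≈0.531, 0.0, 1.0 bits
```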

57 7 - 57 The noisy channel model in linguistics
– Diagram: I → noisy channel p(o|i) → O → Decoder → Î
– p(i): language model; p(o|i): channel probability
– Decoding: Î = argmax_i p(i|o) = argmax_i p(i) p(o|i)

58 7 - 58 Speech Recognition
– Find the sequence of words W that maximizes P(W | Speech Signal).
– By Bayes' theorem, P(W | Speech Signal) = P(W) P(Speech Signal | W) / P(Speech Signal), so it suffices to maximize P(W) P(Speech Signal | W), where P(W) is the language model and P(Speech Signal | W) models the acoustic aspects of the speech signal.

59 7 - 59 Example: "The {big, pig} dog …"
– Assume P(big | the) = P(pig | the).
– P(the big dog) = P(the) P(big | the) P(dog | the big)
– P(the pig dog) = P(the) P(pig | the) P(dog | the pig)
– Since P(dog | the big) > P(dog | the pig), "the big dog" is selected; i.e., "dog" selects "big".

61 7 - 61 Relative Entropy or Kullback-Leibler Divergence
– For two pmfs p(x) and q(x), their relative entropy is:
  D(p || q) = Σ_x p(x) log2 ( p(x) / q(x) )
– The relative entropy (also known as the Kullback-Leibler divergence) is a measure of how different two probability distributions (over the same event space) are.
– The KL divergence between p and q can also be seen as the average number of bits that are wasted by encoding events from a distribution p with a code based on a not-quite-right distribution q; no bits are wasted when p = q, since D(p || p) = 0.
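A minimal sketch of D(p || q) on two small distributions; the numbers are illustrative:

```python
# KL divergence in bits.
from math import log2

def kl_divergence(p, q):
    """D(p || q) = sum p(x) log2(p(x)/q(x)); assumes q(x) > 0 wherever p(x) > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]
print(kl_divergence(p, p))            # 0.0: no bits are wasted when the model is right
print(round(kl_divergence(p, q), 4))  # ≈ 0.085: extra bits wasted by the wrong model q
```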

62 7 - 62 Application: measuring selectional preferences in selection.
– A measure of how far a joint distribution is from independence:
  I(X; Y) = D( p(x, y) || p(x) p(y) )
– Conditional relative entropy:
  D( p(y|x) || q(y|x) ) = Σ_x p(x) Σ_y p(y|x) log2 ( p(y|x) / q(y|x) )
– Chain rule for relative entropy:
  D( p(x, y) || q(x, y) ) = D( p(x) || q(x) ) + D( p(y|x) || q(y|x) )
– Recall: I(X; Y) = H(X) − H(X|Y).

63 7 - 63 The Relation to Language: Cross-Entropy
– Entropy can be thought of as a matter of how surprised we will be to see the next word given the previous words we already saw.
– The cross entropy between a random variable X with true probability distribution p(x) and another pmf q (normally a model of p) is given by:
  H(X, q) = H(X) + D(p || q) = −Σ_x p(x) log2 q(x)
– Cross-entropy can help us find out what our average surprise for the next word is.

64 7 - 64 Cross Entropy
– Cross entropy of a language L = (X_i) ~ p(x) with respect to a model m:
  H(L, m) = −lim_{n→∞} (1/n) Σ_{x_1n} p(x_1n) log2 m(x_1n)
– If the language is "nice" (stationary and ergodic) and a large body of utterances is available, this can be estimated from a single long sample:
  H(L, m) ≈ −(1/n) log2 m(x_1n)

65 7 - 65 How much good does the approximate model do?
– The cross entropy under an approximate model is never smaller than the entropy under the correct model: H(L) ≤ H(L, m), with equality only when the model matches the true distribution.

66 7 - 66 Proof sketch: compare (1) the cross entropy under the correct model with (2) the cross entropy under the approximate model; the inequality ln x ≤ x − 1 (the line y = x − 1 lies above the curve y = ln x, touching it only at x = 1) gives D(p || q) ≥ 0, hence (1) ≤ (2).

67 7 - 67 Cross entropy of a language L given a model M: ideally measured over "all" English text, i.e., over infinite representative samples of English text; in practice it is approximated on a large test sample.

69 7 - 69 Cross Entropy as a Model Evaluator
– Example: find the best model to produce messages of 20 words.
– Correct probabilistic model:
  Message:      M1    M2    M3    M4    M5    M6    M7    M8
  Probability:  0.05  0.05  0.05  0.10  0.10  0.20  0.20  0.25

70 7 - 70 Per-word cross entropy of an approximate model P (each message is 20 words long):
  −(1/20) × (0.05 log P(M1) + 0.05 log P(M2) + 0.05 log P(M3) + 0.10 log P(M4) + 0.10 log P(M5) + 0.20 log P(M6) + 0.20 log P(M7) + 0.25 log P(M8))
– Test suite of 100 samples distributed exactly according to the correct model:
  M1 × 5, M2 × 5, M3 × 5, M4 × 10, M5 × 10, M6 × 20, M7 × 20, M8 × 25
– Each message is independent of the next.

71 7 - 71 The probability the model assigns to the whole test suite is
  P(M1)^5 × P(M2)^5 × P(M3)^5 × P(M4)^10 × P(M5)^10 × P(M6)^20 × P(M7)^20 × P(M8)^25
– Taking the log:
  log(…) = 5 log P(M1) + 5 log P(M2) + 5 log P(M3) + 10 log P(M4) + 10 log P(M5) + 20 log P(M6) + 20 log P(M7) + 25 log P(M8)

72 7 - 72 This gives the per-word cross entropy because the test suite of 100 examples was exactly indicative of the probabilistic model.
– Measuring the cross entropy of a model works only if the test sequence has not been used by the model builder:
  – closed test (inside test): the test data were seen during model building
  – open test (outside test): the test data were held out from model building
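A minimal sketch of the evaluation: the cross entropy of a candidate model q against the correct distribution p over the eight messages. The uniform candidate model is an assumption of this sketch, used only to show that a wrong model costs extra bits:

```python
# Cross entropy H(p, q) = -sum p(m) log2 q(m), per message and per word.
from math import log2

p = {"M1": 0.05, "M2": 0.05, "M3": 0.05, "M4": 0.10,
     "M5": 0.10, "M6": 0.20, "M7": 0.20, "M8": 0.25}
q = {m: 1 / 8 for m in p}   # assumed uniform "approximate model"

entropy = -sum(p[m] * log2(p[m]) for m in p)
cross_entropy = -sum(p[m] * log2(q[m]) for m in p)
print(round(entropy, 3), round(cross_entropy, 3))  # ≈ 2.741 vs 3.0 bits per message
print(round(cross_entropy / 20, 3))                # per-word figure for 20-word messages
```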

73 7 - 73 The Composition of Brown and LOB Corpora
– Brown Corpus: Brown University Standard Corpus of Present-day American English.
– LOB Corpus: Lancaster/Oslo-Bergen Corpus of British English.

74 7 - 74 Text categories and number of texts (Brown, LOB):
  A  Press: reportage                                             44  44
  B  Press: editorial                                             27  27
  C  Press: reviews                                               17  17
  D  Religion                                                     17  17
  E  Skills and hobbies                                           36  38
  F  Popular lore                                                 48  44
  G  Belles lettres, biography, memoirs, etc.                     75  77
  H  Miscellaneous (mainly government documents, reports, etc.)   30  30
  J  Learned (including science and technology)                   80  80
  K  General fiction                                              29  29
  L  Mystery and detective fiction                                24  24
  M  Science fiction                                               6   6
  N  Adventure and western fiction                                29  29
  P  Romance and love story                                       29  29
  R  Humour                                                        9   9

75 7 - 75 The Entropy of English
– We can model English using n-gram models (also known as Markov chains).
– These models assume limited memory, i.e., we assume that the next word depends only on the previous k words [kth order Markov approximation].

76 7 - 76 P(w_1,n) = Π_{i=1..n} P(w_i | w_1, …, w_{i−1})
– bigram approximation: P(w_i | w_1, …, w_{i−1}) ≈ P(w_i | w_{i−1})
– trigram approximation: P(w_i | w_1, …, w_{i−1}) ≈ P(w_i | w_{i−2}, w_{i−1})
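A minimal sketch of a bigram model estimated by relative frequency from a toy corpus; the corpus and test sentence are illustrative, and no smoothing is applied:

```python
# Bigram language model by relative frequency.
from collections import Counter

corpus = "the dog barks the dog runs the cat runs".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, w):
    return bigrams[(prev, w)] / unigrams[prev]

sentence = "the dog runs".split()
prob = 1.0
for prev, w in zip(sentence, sentence[1:]):
    prob *= bigram_prob(prev, w)
print(prob)   # P(dog|the) * P(runs|dog) = (2/3) * (1/2) = 1/3
```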

77 7 - 77 The Entropy of English
– What is the entropy of English?

78 7 - 78 Perplexity
– A measure related to the notion of cross-entropy and used in the speech recognition community is called the perplexity:
  perplexity(x_1n, m) = 2^H(x_1n, m) = m(x_1n)^(−1/n)
– A perplexity of k means that you are as surprised on average as you would have been if you had had to guess between k equiprobable choices at each step.
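A minimal sketch turning per-word cross entropy into perplexity, i.e., 2 raised to the cross entropy; the per-word model probabilities are made up for illustration:

```python
# Perplexity = 2 ** (per-word cross entropy).
from math import log2

word_probs = [0.1, 0.25, 0.5, 0.05]   # m(w_i | history) for each test word (illustrative)
n = len(word_probs)

cross_entropy = -sum(log2(p) for p in word_probs) / n   # bits per word
print(round(cross_entropy, 3), round(2 ** cross_entropy, 3))  # ≈ 2.661 bits, perplexity ≈ 6.325
# A model that always guesses uniformly among k choices has perplexity exactly k:
print(2 ** (-sum(log2(1 / 8) for _ in range(n)) / n))          # 8.0
```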

