Statistical methods in NLP Course 2 Diana Trandabăț


1 Statistical methods in NLP Course 2 Diana Trandabăț 2015-2016

2 Quick Recap Out of three prisoners, one, randomly selected without their knowledge, will be executed, and the other two will be released. One of the prisoners asks the guard to show him which of the other two will be released (at least one of them will be released anyway). If the guard answers, will the prisoner have more information than before?

3 Quick Recap

4 Essential Information Theory Developed by Shannon in the 1940s, to maximize the amount of information that can be transmitted over an imperfect communication channel: data compression (entropy) and transmission rate (channel capacity).

5 Probability mass function The probability that a random variable X takes different numeric values: p(x) = P(X = x) = P(A_x), where A_x = {ω ∈ Ω : X(ω) = x}. Example: the probability of the number of heads when flipping 2 coins: p(0) = ¼, p(1) = ½, p(2) = ¼.

6 Probability mass function The probability that a random variable X takes different numeric values: p(x) = P(X = x) = P(A_x), where A_x = {ω ∈ Ω : X(ω) = x}. Example: the probability of the number of heads when flipping 2 coins: p(nr_heads = 0) = ¼, p(nr_heads = 1) = ½, p(nr_heads = 2) = ¼.
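A minimal sketch (plain Python, standard library only; not from the slides) that enumerates the sample space of two fair coin flips and recovers this probability mass function:

```python
from itertools import product
from fractions import Fraction

# Sample space of two fair coin flips: 4 equally likely outcomes.
outcomes = list(product("HT", repeat=2))

# p(x) = P(X = x), where X(omega) = number of heads in outcome omega.
pmf = {}
for omega in outcomes:
    x = omega.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

print(pmf)  # {2: Fraction(1, 4), 1: Fraction(1, 2), 0: Fraction(1, 4)}
```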

8 Expectation The expectation is the mean or average of a random variable: E(X) = Σ_x x·p(x). Example: the expectation of rolling one die, with Y the value of its face, is E(Y) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. E(X+Y) = E(X) + E(Y); E(XY) = E(X)·E(Y) if X and Y are independent.
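A small sketch of the definition E(Y) = Σ y·p(y), assuming the die on the slide is a fair six-sided one:

```python
from fractions import Fraction

# Fair six-sided die: p(y) = 1/6 for each face y = 1..6.
pmf = {y: Fraction(1, 6) for y in range(1, 7)}

# E(Y) = sum over y of y * p(y)
expectation = sum(y * p for y, p in pmf.items())
print(expectation)  # 7/2, i.e. 3.5
```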

9 Variance The variance of a random variable is a measure of whether the values of the variable tend to be consistent over trials or to vary a lot. Var(X) = E((X − E(X))²) = E(X²) − (E(X))². The commonly used standard deviation σ is the square root of the variance.
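Continuing the same (assumed fair) die, a quick check of the identity Var(X) = E(X²) − (E(X))²:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}      # fair six-sided die

e_x = sum(x * p for x, p in pmf.items())            # E(X)   = 7/2
e_x2 = sum(x * x * p for x, p in pmf.items())       # E(X^2) = 91/6

variance = e_x2 - e_x ** 2
print(variance)                                     # 35/12 ≈ 2.92
```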

10 Entropy X: discrete random variable; p(x): the probability mass function of X. Entropy (or self-information): H(X) = −Σ_x p(x) log₂ p(x). Entropy measures the amount of information in a random variable; it is the average length of the message needed to transmit an outcome of that variable using the optimal code (in bits).
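A minimal entropy helper (base-2 logarithms, zero-probability outcomes skipped so that 0·log 0 counts as 0); the coin-toss exercise below can be checked with it:

```python
import math

def entropy(pmf):
    """H(X) = -sum_x p(x) * log2 p(x), in bits."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

print(entropy({"heads": 0.5, "tails": 0.5}))   # 1.0 bit for a fair coin
print(entropy({0: 0.25, 1: 0.5, 2: 0.25}))     # 1.5 bits for the two-coin example above
```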

11 Entropy (cont) H(X) ≥ 0, and H(X) = 0 only when one outcome has probability 1, i.e. when the value of X is determinate, hence providing no new information.

12 Exercise Compute the Entropy of tossing a coin

13 Exercise For a fair coin, H(X) = −(½ log₂ ½ + ½ log₂ ½) = 1 bit.

14 Exercise 2 Example: entropy of rolling an 8-sided die.

15 Exercise 2 Example: entropy of rolling an 8-sided die: H(X) = log₂ 8 = 3 bits, so each face can be given a 3-bit code:

Face: 1   2   3   4   5   6   7   8
Code: 001 010 011 100 101 110 111 000
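A sketch confirming that the uniform 8-sided die needs log₂ 8 = 3 bits, matching the 3-bit codes in the table:

```python
import math

# Uniform 8-sided die: p(face) = 1/8 for every face.
pmf = {face: 1 / 8 for face in range(1, 9)}

h = -sum(p * math.log2(p) for p in pmf.values())
print(h)   # 3.0 bits, so a fixed 3-bit code per face is optimal
```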

16 Exercise 3 Entropy of a biased die: P(X=1) = 1/2, P(X=2) = 1/4, P(X=3) = 0, P(X=4) = 0, P(X=5) = 1/8, P(X=6) = 1/8.

17 Exercise 3 Entropy of the biased die: H(X) = −(½ log₂ ½ + ¼ log₂ ¼ + ⅛ log₂ ⅛ + ⅛ log₂ ⅛) = 0.5 + 0.5 + 0.375 + 0.375 = 1.75 bits.
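A quick numerical check of this value (zero-probability faces contribute nothing):

```python
import math

pmf = {1: 1/2, 2: 1/4, 3: 0, 4: 0, 5: 1/8, 6: 1/8}

h = -sum(p * math.log2(p) for p in pmf.values() if p > 0)
print(h)   # 1.75 bits, versus log2(6) ≈ 2.585 bits for a fair six-sided die
```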

18 Simplified Polynesian – letter frequencies – per-letter entropy – coding

Letter:     p    t    k    a    i    u
Frequency: 1/8  1/4  1/8  1/4  1/8  1/8
Code:      100   00  101   01  110  111

19 Simplified Polynesian Per-letter entropy: H(P) = −Σ p(i) log₂ p(i) = 2 × (¼ × 2) + 4 × (⅛ × 3) = 2.5 bits per letter; the code in the table above reaches this average length exactly.
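A sketch that recomputes the per-letter entropy and the average length of the code, using the frequencies and codes as reconstructed in the table above:

```python
import math

# Letter frequencies and codes from the table above.
pmf   = {"p": 1/8, "t": 1/4, "k": 1/8, "a": 1/4, "i": 1/8, "u": 1/8}
codes = {"p": "100", "t": "00", "k": "101", "a": "01", "i": "110", "u": "111"}

h = -sum(p * math.log2(p) for p in pmf.values())
avg_code_len = sum(pmf[c] * len(codes[c]) for c in pmf)

print(h)             # 2.5 bits per letter
print(avg_code_len)  # 2.5 bits per letter: this code meets the entropy exactly
```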

20 Joint Entropy The joint entropy of two random variables X, Y is the amount of information needed on average to specify both their values: H(X,Y) = −Σ_x Σ_y p(x,y) log₂ p(x,y).

21 Conditional Entropy The conditional entropy of a random variable Y given another random variable X expresses how much extra information one still needs to supply on average to communicate Y, given that the other party knows X: H(Y|X) = Σ_x p(x) H(Y|X=x) = −Σ_x Σ_y p(x,y) log₂ p(y|x).

22 Chain Rule H(X,Y) = H(X) + H(Y|X), and more generally H(X₁, …, Xₙ) = H(X₁) + H(X₂|X₁) + … + H(Xₙ|X₁, …, Xₙ₋₁).
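A sketch that computes H(Y|X) from its definition and verifies the chain rule on a small joint distribution (the numbers are illustrative, not from the slides):

```python
import math
from collections import defaultdict

def H(pmf):
    """Entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# Illustrative joint distribution p(x, y).
joint = {("a", 0): 1/2, ("a", 1): 1/4, ("b", 0): 1/8, ("b", 1): 1/8}

# Marginal p(x) and conditional distributions p(y | x).
px = defaultdict(float)
cond = defaultdict(dict)
for (x, y), p in joint.items():
    px[x] += p
    cond[x][y] = cond[x].get(y, 0.0) + p
for x in cond:
    cond[x] = {y: p / px[x] for y, p in cond[x].items()}

# H(Y | X) = sum_x p(x) * H(Y | X = x), from the definition.
h_y_given_x = sum(px[x] * H(cond[x]) for x in px)

# Chain rule: H(X, Y) = H(X) + H(Y | X).
print(H(joint), H(px) + h_y_given_x)   # both ≈ 1.75 bits here
```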

23 Simplified Polynesian Revisited – syllable structure: all words consist of sequences of CV syllables (C: consonant, V: vowel).

24 More on entropy Entropy rate (per-word/per-letter entropy): H_rate = (1/n) H(X₁, …, Xₙ). Entropy of a language L: H(L) = lim_{n→∞} (1/n) H(X₁, …, Xₙ).

25 Mutual Information I(X,Y) is the mutual information between X and Y. It is a measure of the dependence between two random variables, or the amount of information one random variable contains about the other: I(X,Y) = H(X) − H(X|Y) = H(Y) − H(Y|X).

26 Mutual Information (cont) I(X,Y) is 0 only when X and Y are independent: H(X|Y) = H(X). Since H(X) = H(X) − H(X|X) = I(X,X), entropy is the self-information.

27 More on Mutual Information Conditional mutual information: I(X,Y|Z) = H(X|Z) − H(X|Y,Z). Chain rule: I(X₁…Xₙ, Y) = Σᵢ I(Xᵢ, Y | X₁…Xᵢ₋₁). Pointwise mutual information: pmi(x,y) = log₂ ( p(x,y) / (p(x)·p(y)) ).
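Pointwise mutual information is the variant most often used in NLP (e.g. word association scores). A minimal sketch with made-up corpus counts (the numbers are illustrative only):

```python
import math

# Made-up corpus counts: corpus size, unigram counts, and bigram count.
N = 1_000_000
count_w1, count_w2, count_pair = 2_000, 3_000, 150

p_w1, p_w2, p_pair = count_w1 / N, count_w2 / N, count_pair / N

# pmi(x, y) = log2( p(x, y) / (p(x) * p(y)) )
pmi = math.log2(p_pair / (p_w1 * p_w2))
print(pmi)   # ≈ 4.64 bits: the pair co-occurs about 25x more often than chance
```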

28 Exercise 4 Let p(x,y) be given by:

        Y=0   Y=1
X=0     1/3   1/3
X=1      0    1/3

Find: (a) H(X), H(Y); (b) H(X|Y), H(Y|X); (c) H(X,Y); (d) I(X,Y).
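A sketch that works through the exercise numerically, using the joint table as reconstructed above (treat the cell values as an assumption if your copy of the slide differs):

```python
import math
from collections import defaultdict

def H(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# Joint distribution p(x, y) as reconstructed above (rows X, columns Y).
joint = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 0.0, (1, 1): 1/3}

px, py = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    px[x] += p
    py[y] += p

h_x, h_y, h_xy = H(px), H(py), H(joint)
print("H(X)   =", h_x)               # ≈ 0.918 bits
print("H(Y)   =", h_y)               # ≈ 0.918 bits
print("H(X,Y) =", h_xy)              # = log2(3) ≈ 1.585 bits
print("H(X|Y) =", h_xy - h_y)        # ≈ 0.667 bits
print("H(Y|X) =", h_xy - h_x)        # ≈ 0.667 bits
print("I(X,Y) =", h_x + h_y - h_xy)  # ≈ 0.252 bits
```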

29 Entropy and Linguistics Entropy is a measure of uncertainty: the more we know about something, the lower the entropy. If a language model captures more of the structure of the language, then its entropy should be lower. We can use entropy as a measure of the quality of our models.

30 Entropy and Linguistics Relative entropy (Kullback-Leibler divergence): a measure of how different two probability distributions are; the average number of bits that are wasted by encoding events from a distribution p with a code based on a not-quite-right distribution q: D(p||q) = Σ_x p(x) log₂ (p(x)/q(x)). Noisy channel => next class!
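A minimal sketch of this quantity; the two distributions are illustrative, not from the course:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log2(p(x) / q(x)), in bits.
    Assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

# True distribution p versus a not-quite-right model q (illustrative numbers).
p = {"a": 0.50, "b": 0.25, "c": 0.25}
q = {"a": 0.25, "b": 0.25, "c": 0.50}

print(kl_divergence(p, q))   # 0.25 bits wasted per event, on average
print(kl_divergence(p, p))   # 0.0: no waste when the model matches exactly
```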

31 Great! See you next time!

