1 Natural Language Processing Vasile Rus http://www.cs.memphis.edu/~vrus/teaching/nlp

2 Outline
Announcements
Probabilistic CFG
– Learning the parameters
Dependency Grammar
Parser Evaluation

3 Parsing with Probabilistic CFGs
Develop a probabilistic model
– Assigning probabilities to parse trees
Get the probabilities for the model
Parse with probabilities
– Slight modification to the dynamic programming approach
– Task is to find the max probability tree for an input

4 Probability Model
Attach probabilities to grammar rules
The expansions for a given non-terminal sum to 1
– VP -> Verb          .55
– VP -> Verb NP       .40
– VP -> Verb NP NP    .05
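
A minimal sketch of how such a grammar might be stored, using the VP rules above; the NP expansions and their probabilities are made up purely for illustration.

```python
# Illustrative PCFG fragment: each left-hand side maps to its expansions and
# their probabilities; the expansions of a non-terminal must sum to 1.
pcfg = {
    "VP": {("Verb",): 0.55, ("Verb", "NP"): 0.40, ("Verb", "NP", "NP"): 0.05},
    "NP": {("Det", "Noun"): 0.7, ("Pronoun",): 0.3},  # made-up NP expansions
}

# Sanity check: each non-terminal's expansions form a probability distribution.
for lhs, expansions in pcfg.items():
    assert abs(sum(expansions.values()) - 1.0) < 1e-9, lhs
```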

5 Probability Model (cont’d)
A derivation (tree) consists of the set of grammar rules that are in the tree
The probability of a derivation (tree) is just the product of the probabilities of the rules in the derivation
The probability of a word sequence (sentence)
– is the probability of its tree in the unambiguous case
– is the sum of the probabilities of the trees in the ambiguous case
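
A small sketch of these two computations, assuming a tree is represented simply as the list of (lhs, rhs) rules it uses and `pcfg` is a nested dictionary of rule probabilities as in the previous sketch.

```python
from math import prod

def tree_probability(tree_rules, pcfg):
    """Probability of a derivation: the product of its rules' probabilities."""
    return prod(pcfg[lhs][rhs] for lhs, rhs in tree_rules)

def sentence_probability(parses, pcfg):
    """Ambiguous sentence: sum over all of its parse trees; with a single
    (unambiguous) parse this is just that tree's probability."""
    return sum(tree_probability(t, pcfg) for t in parses)
```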

6 Getting the Probabilities
From an annotated database (a treebank)
– Treebanks are annotated manually
– English: Penn Treebank (about 1.6 million words)
– Chinese: Chinese Treebank (about 500K words)
– German: German Treebank (size ??)
Learned from a raw corpus

7 Learning from a Treebank
The easiest and the most accurate way to learn probabilities
– Get a large collection of parsed sentences
– Collect counts for each non-terminal rule expansion in the collection
– Normalize
– Done
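
A minimal sketch of the count-and-normalize step, assuming each treebank tree has already been flattened into a list of (lhs, rhs) rule uses (that flattening step is not shown).

```python
from collections import defaultdict

def estimate_pcfg(treebank):
    """treebank: iterable of parsed sentences, each a list of (lhs, rhs) rule uses.
    Returns maximum-likelihood estimates P(lhs -> rhs) = count(lhs -> rhs) / count(lhs)."""
    rule_counts = defaultdict(int)
    lhs_counts = defaultdict(int)
    for tree in treebank:
        for lhs, rhs in tree:
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {(lhs, rhs): c / lhs_counts[lhs]
            for (lhs, rhs), c in rule_counts.items()}
```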

8 Learning
What if you don’t have a treebank (and can’t get one)?
– Take a large collection of text and parse it
– In the case of syntactically ambiguous sentences, collect all the possible parses
– Prorate the rule statistics gathered for rules in the ambiguous cases by their probability
– Proceed as you did with a treebank

9 Learning
How?
– If you don’t have the probabilities, how can you prorate the ambiguous parses based on their probabilities?
Magic:
– Guess some random probabilities for the rules
– Use those to assign probabilities to the trees

10 Learning
Would that work?
– If you just made up the numbers, how could they possibly help?
– Well, they constitute a prediction
– We can re-adjust them incrementally until the predictions start to look right
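
What slides 8–10 describe is the idea behind Expectation-Maximization (the inside-outside algorithm is the standard, efficient realization for PCFGs). Below is a heavily simplified sketch: it enumerates every candidate parse of each sentence via a hypothetical `all_parses` helper, prorates rule counts by each parse's relative probability, and renormalizes.

```python
from collections import defaultdict
from math import prod

def em_estimate(sentences, probs, all_parses, iterations=10):
    """Re-estimate rule probabilities without a treebank.
    probs: initial {(lhs, rhs): probability} guesses (can be random).
    all_parses(sentence): hypothetical helper returning the candidate parses,
    each parse given as a list of (lhs, rhs) rule uses."""
    for _ in range(iterations):
        rule_counts = defaultdict(float)
        lhs_counts = defaultdict(float)
        for sent in sentences:
            parses = all_parses(sent)
            weights = [prod(probs[r] for r in parse) for parse in parses]
            total = sum(weights) or 1.0
            # Prorate each parse's rule uses by that parse's relative probability
            for parse, w in zip(parses, weights):
                for lhs, rhs in parse:
                    rule_counts[(lhs, rhs)] += w / total
                    lhs_counts[lhs] += w / total
        # Renormalize the prorated counts to get the next round of probabilities
        probs = {(lhs, rhs): c / lhs_counts[lhs]
                 for (lhs, rhs), c in rule_counts.items()}
    return probs
```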

11 Assumptions
We’re assuming that there is a grammar to parse with
We’re assuming the existence of a large, robust dictionary with parts of speech
We’re assuming the ability to parse (i.e. a parser)
Given all that… we can parse probabilistically

12 Typical Approach
Bottom-up dynamic programming approach
Assign probabilities to constituents as they are completed and placed in the table
Use the max probability for each constituent going up

13 What’s that last bullet mean?
Say we’re talking about a final part of a parse
– S -> NP[0,i] VP[i,j]  (an NP spanning words 0..i followed by a VP spanning words i..j)
– The probability of the S is P(S -> NP VP) * P(NP) * P(VP)
– P(NP) and P(VP) (highlighted in green on the slide) are already known, since we are doing bottom-up parsing

14 Max
P(NP) is known
What if there are multiple NPs for the span of text in question (0 to i)?
– Take the max
– This does not mean that other kinds of constituents for the same span are ignored (i.e. they might be in the solution)
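
A minimal sketch of the bottom-up max-probability computation from slides 12–14. Purely for simplicity it assumes the grammar is in Chomsky Normal Form (binary rules plus word-level rules), an assumption the slides do not make explicit.

```python
def pcky(words, lexical, binary):
    """Probabilistic CKY: best[(i, j)][X] = max probability of an X spanning words[i:j].
    lexical: {(X, word): prob} for rules X -> word;
    binary:  {(X, Y, Z): prob} for rules X -> Y Z."""
    n = len(words)
    best = {(i, i + 1): {} for i in range(n)}
    # Word-level constituents
    for i, w in enumerate(words):
        for (X, word), p in lexical.items():
            if word == w:
                best[(i, i + 1)][X] = max(best[(i, i + 1)].get(X, 0.0), p)
    # Larger spans, built bottom-up; keep only the max probability per constituent
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            cell = {}
            for k in range(i + 1, j):
                for (X, Y, Z), p in binary.items():
                    py = best[(i, k)].get(Y, 0.0)
                    pz = best[(k, j)].get(Z, 0.0)
                    if py and pz:
                        cell[X] = max(cell.get(X, 0.0), p * py * pz)
            best[(i, j)] = cell
    return best[(0, n)].get("S", 0.0)  # max probability of a full-sentence S
```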

15 Problems
The probability model discussed so far is just based on the rules in the derivation…
– Doesn’t use the words in any real way
– Doesn’t take into account where in the derivation a rule is used

16 Solution
Add lexical dependencies to the scheme…
– Infiltrate the influence of particular words into the probabilities in the derivation, i.e. condition on actual words
– All the words? No, only the right ones

17 Heads
To do that, we’re going to make use of the notion of the head of a phrase
– The head of an NP is its noun
– The head of a VP is its verb
– The head of a PP is its preposition
A way of achieving some generalization
– Replace phrases with their heads

18 Heads
Finding basic phrases (NP, VP, PP) and their heads:
– Shallow parsing: http://l2r.cs.uiuc.edu/~cogcomp
– Heuristics

19 Example (right)

20 Example (wrong)

21 How?
We used to have
– VP -> V NP PP   P(r|VP)
– That’s the count of this rule divided by the number of VPs in a treebank
Now we have
– VP(dumped) -> V(dumped) NP(sacks) PP(in)
– P(r | VP && dumped is the verb && sacks is the head of the NP && in is the head of the PP)
– Not likely to have significant counts in any treebank

22 Declare Independence
When stuck, exploit independence and collect the statistics you can…
We’ll focus on capturing two things
– Verb subcategorization
  – Particular verbs have affinities for particular VPs
– Objects’ affinities for their predicates (mostly their mothers and grandmothers)
  – Some objects fit better with some predicates than others

23 Subcategorization
Condition particular VP rules on their head… so
– r: VP -> V NP PP
– P(r|VP) becomes P(r | VP && dumped)
What’s the count?
– How many times this rule was used with dump, divided by the total number of VPs headed by dump
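
A minimal sketch of this estimate, assuming two hypothetical count tables gathered from a treebank.

```python
def p_rule_given_head(rule, head, rule_head_counts, vp_head_counts):
    """P(rule | VP, head): how often this VP expansion occurs when the VP is
    headed by `head`, out of all VPs headed by `head`.
    Both tables are hypothetical, e.g.
    rule_head_counts[(("V", "NP", "PP"), "dumped")] and vp_head_counts["dumped"]."""
    return rule_head_counts.get((rule, head), 0) / vp_head_counts[head]
```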

24 Preferences
Subcategorization captures the affinity between VP heads (verbs) and the VP rules they go with
What about the affinity between VP heads and the heads of the other daughters of the VP?

25 Preferences
The issue here is the attachment of the PP, so the affinities we care about are the ones between dumped and into vs. sacks and into
So count the places where dumped is the head of a constituent that has a PP daughter with into as its head, and normalize
Vs. the situation where sacks is the head of a constituent that has a PP daughter with into as its head

26 Preferences
Consider the VPs
– ate spaghetti with gusto
– ate spaghetti with marinara
The affinity of gusto for eat is much larger than its affinity for spaghetti
On the other hand, the affinity of marinara for spaghetti is much higher than its affinity for eat
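
A minimal sketch of the comparison described on slides 25–26, again assuming hypothetical count tables collected from parsed data.

```python
def attachment_preference(prep, verb, noun, pp_head_counts, head_counts):
    """Compare the affinity of `prep` for the verb vs. for the noun.
    pp_head_counts[(h, prep)]: times a constituent headed by h has a PP daughter
    headed by prep; head_counts[h]: times h heads such a constituent.
    Returns 'verb' or 'noun', whichever attachment is more likely."""
    p_verb = pp_head_counts.get((verb, prep), 0) / head_counts[verb]
    p_noun = pp_head_counts.get((noun, prep), 0) / head_counts[noun]
    return "verb" if p_verb >= p_noun else "noun"
```

For example, with counts reflecting the slide's intuition, attachment_preference("with", "eat", "spaghetti", ...) would come out "verb" for gusto-like PPs and "noun" for marinara-like PPs.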

27 Moving away from CFG
To dependency grammars

28 Dependency grammars
Model binary dependencies between words
For instance:
– I eat
– eat apples
Find the set of dependencies that best fits the input words (i.e. with no contradictions)

29 Dependency grammars
Link Grammar (CMU)
– A lexicon
– A set of rules indicating the type of links a word can participate in
– Example: I: S+    eat: S- O+    apple: O-
– Match these links (using a dynamic programming approach)
– http://bobo.link.cs.cmu.edu/link/

30 Evaluating parsers
Precision
– # of correct phrases/dependencies in the parse / total # of phrases/dependencies in the parse
Recall
– # of correct phrases/dependencies in the parse / total # of phrases/dependencies in the treebank (gold standard)
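
A minimal sketch of these two measures, assuming constituents are represented as (label, start, end) tuples and both arguments are sets.

```python
def precision_recall(parse_constituents, gold_constituents):
    """Precision = correct / proposed; Recall = correct / gold (treebank)."""
    correct = len(parse_constituents & gold_constituents)
    precision = correct / len(parse_constituents) if parse_constituents else 0.0
    recall = correct / len(gold_constituents) if gold_constituents else 0.0
    return precision, recall
```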

31 Evaluating Parsers
Cross-brackets
– # of crossing brackets between the parse and the treebank
– E.g. (A (B C)) and ((A B) C) have one crossing bracket
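
A minimal sketch of the crossing-bracket count, assuming each bracketing is given as a collection of (start, end) spans.

```python
def crossing_brackets(parse_spans, gold_spans):
    """Count parse spans that overlap a gold span without either containing
    the other, e.g. (0, 2) vs. (1, 3): the two bracketings cross."""
    def crosses(a, b):
        return a[0] < b[0] < a[1] < b[1] or b[0] < a[0] < b[1] < a[1]
    return sum(1 for p in parse_spans if any(crosses(p, g) for g in gold_spans))
```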

32 Summary
Probabilistic CFG
– Learning the parameters
Dependency Grammar
Parser Evaluation

33 Next Time
Unification Grammar

