
1 Grammars August 31, 2005

2 CFGs: a summary
CFGs appear to be just about what we need to account for a lot of basic syntactic structure in English. But there are problems. They can be dealt with adequately, although not elegantly, by staying within the CFG framework. There are also simpler, more elegant solutions that take us out of the CFG framework (beyond its formal power): syntactic theories such as HPSG, LFG, CCG, Minimalism, etc.

3 Parsing
Parsing with a CFG is the task of assigning a correct parse tree (or derivation) to a string, given a grammar. Correct here means consistent with the input and the grammar; it does not mean that it is the "right" tree in any global sense. The leaves of the parse tree must cover all and only the input, and the tree must correspond to a valid derivation according to the grammar. Parsing can be viewed as search: the search space is the space of parse trees generated by the grammar, and the search is guided by the structure of that space and by the input. First we will look at basic (bad) parsing methods; after seeing what is wrong with them, we will look at better methods.

4 A Simple English Grammar
S → NP VP
S → Aux NP VP
S → VP
NP → Det NOM
NP → ProperNoun
NOM → Noun
NOM → Noun NOM
NOM → NOM PP
VP → Verb
VP → Verb NP
PP → Prep NOM

Det → that | this | a | the
Noun → book | flight | meal | money
Verb → book | include | prefer
Aux → does
Prep → from | to | on
ProperNoun → Houston | TWA

5 Basic Top-Down Parsing
A top-down parser searches for a parse tree by trying to build from the root node S (the start symbol) down to the leaves. First we create the root node, then its children; we choose one of the children and create its children in turn. We can search the space of parse trees breadth-first (level by level) or depth-first (expanding one child fully before the others).

6 Top-Down Space

7 Basic Bottom-Up Parsing
In bottom-up parsing, the parser starts with the words of the input and tries to build parse trees from the words up. The parser is successful if it builds a parse tree rooted in the start symbol that covers all of the input.

8 Bottom-Up Space

9 Top-Down or Bottom-Up?
Each of the top-down and bottom-up parsing strategies has its own advantages and disadvantages. The top-down strategy never wastes time exploring trees that cannot result in the start symbol (it starts from there), whereas the bottom-up strategy may waste time on such trees. On the other hand, the top-down strategy spends time on trees that are not consistent with the input, while the bottom-up strategy never suggests trees that are not at least locally grounded in the actual input. Neither of these two basic strategies is good enough on its own for parsing natural languages.

10 Problems with Basic Top-Down Parser
Even a top-down parser with bottom-up filtering has three problems that make it an insufficient solution to the general-purpose parsing problem: left-recursion, ambiguity, and inefficient reparsing of subtrees. First we will discuss these three problems; then we will present the Earley algorithm, which avoids them.

11 Left-Recursion
When left-recursive grammars are used, a top-down, depth-first, left-to-right parser can dive into an infinite path. A grammar is left-recursive if it contains at least one non-terminal A such that A ⇒* A α for some α. This kind of structure is common in natural language grammars, e.g. NP → NP PP. We can convert a left-recursive grammar into an equivalent grammar that is not left-recursive:
A → A α | β   ==>   A → β A′,   A′ → α A′ | ε
Unfortunately, the resulting grammar may no longer be the most grammatically natural way to represent the syntactic structure.
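For instance, applying this transformation to the left-recursive NP rule above (a sketch, taking the non-recursive NP expansions of our sample grammar, Det NOM and ProperNoun, as β):
NP  → Det NOM NP′
NP  → ProperNoun NP′
NP′ → PP NP′ | ε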

12 Left-Recursion
What happens in the following situation?
S -> NP VP
S -> Aux NP VP
NP -> NP PP
NP -> Det Nominal
With a sentence starting with "Did the flight…", a top-down depth-first parser expanding NP -> NP PP keeps predicting another NP forever.

13 Ambiguity
One morning I shot an elephant in my pajamas. How he got into my pajamas I don't know. (Groucho Marx)

14 Ambiguity
A top-down parser is not efficient at handling ambiguity. Local ambiguity leads to hypotheses that are locally reasonable but eventually lead nowhere; these cause backtracking. Global ambiguity leads to multiple parses for the same input. A parser without disambiguation tools must simply return all possible parses, and many of them will be unreasonable; most disambiguation methods require statistical and semantic knowledge. Most applications do not want all possible parses, they want a single correct parse. The underlying problem is that an exponential number of parses is possible for certain inputs.

15 Ambiguity - Example
If we add the following rules to our grammar:
VP → VP PP
NP → NP PP
PP → Prep NP
then the input
Show me the meal on flight 286 from Ankara to Istanbul.
has many parses (14 of them), some of which are really strange. The number of NP parses grows with the number of PPs:
Number of PPs | Number of NP parses
2             | 2
3             | 5
4             | 14

16 Lots of Ambiguity
Church and Patil (1982): the number of parses for such sentences grows at the rate of the number of parenthesizations of arithmetic expressions, which grow as the Catalan numbers:
PPs | Parses
2   | 2
3   | 5
4   | 14
5   | 42
6   | 132
7   | 429
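As a quick sanity check, here is a minimal sketch that computes these counts (the function name and indexing are mine):

def catalan(n):
    # C(0) = 1, C(k) = sum over i of C(i) * C(k-1-i)
    c = [1] * (n + 1)
    for k in range(1, n + 1):
        c[k] = sum(c[i] * c[k - 1 - i] for i in range(k))
    return c[n]

# Number of parses for 2..7 PPs, matching the table above
print([catalan(n) for n in range(2, 8)])   # [2, 5, 14, 42, 132, 429]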

17 Avoiding Repeated Work
Parsing is hard and slow, and it is wasteful to redo the same work over and over. Consider an attempt to top-down parse the following as an NP:
A flight from Indianapolis to Houston on TWA
Grammar rules:
NP → Det NOM
NP → NP PP
NP → ProperNoun

18-21 Figures: the partial NP parse trees (e.g., the subtree over "flight") that get built, discarded, and rebuilt during backtracking.

22 Repeated Parsing of Subtrees
The parser often builds valid trees for portions of the input, then discards them during backtracking, only to find that it has to rebuild them later. It creates small parse trees that fail because they do not cover all of the input, backtracks to cover more input, and recreates the same subtrees again and again. The same work is repeated many times unnecessarily.

23 Dynamic Programming
We want a parsing algorithm (based on dynamic programming) that fills a table with solutions to subproblems and that:
does not do repeated work,
does top-down search with bottom-up filtering,
solves the left-recursion problem,
solves an exponential problem in O(N³) time.
The answer is the Earley algorithm.

24 Earley Algorithm
Fills a table in a single pass over the input. The table has N+1 entries (N is the number of words). Table entries represent completed constituents and their locations, in-progress constituents, and predicted constituents. Each possible subtree is represented only once, so it can be shared by all the parses that need it.

25 States
A state in a table entry contains three kinds of information:
a subtree corresponding to a single grammar rule,
information about the progress made in completing this subtree,
the position of the subtree with respect to the input.
We use a dot in the state's grammar rule to indicate the progress made in recognizing it; the resulting structure is called a dotted rule. A state's position is represented by two numbers indicating where the state starts and where its dot lies.

26 Earley States
Table entries are called states and are represented with dotted rules:
S -> · VP             A VP is predicted
NP -> Det · Nominal   An NP is in progress
VP -> V NP ·          A VP has been found

27 Earley States/Locations
We also need to know where these things are in the input:
S -> · VP [0,0]             A VP is predicted at the start of the sentence
NP -> Det · Nominal [1,2]   An NP is in progress; the Det goes from 1 to 2
VP -> V NP · [0,3]          A VP has been found starting at 0 and ending at 3

28 States - Dotted Rules
Three example states (for the input "Book that flight"):
S → · VP, [0,0]
NP → Det · NOM, [1,2]
VP → Verb NP ·, [0,3]
The first state represents a top-down prediction for S. The first 0 indicates that the constituent predicted by this state should begin at position 0 (the beginning of the input); the second 0 indicates that the dot lies at position 0. The second state represents an in-progress constituent: it starts at position 1 and the dot lies at position 2. The third state represents a completed constituent: a VP has been successfully parsed, covering the input from position 0 to position 3.
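Such states are easy to represent directly; here is a minimal Python sketch (the class and field names are my own, not part of the algorithm's standard description):

from dataclasses import dataclass

@dataclass(frozen=True)
class EarleyState:
    lhs: str          # left-hand side of the rule, e.g. "VP"
    rhs: tuple        # right-hand side symbols, e.g. ("Verb", "NP")
    dot: int          # how many RHS symbols have been recognized so far
    start: int        # input position where the constituent begins
    end: int          # input position of the dot

    def is_complete(self) -> bool:
        return self.dot == len(self.rhs)

    def next_category(self):
        return None if self.is_complete() else self.rhs[self.dot]

# The completed VP from the example: VP -> Verb NP ., [0,3]
print(EarleyState("VP", ("Verb", "NP"), 2, 0, 3).is_complete())   # True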

29 Graphically

30 Earley Algorithm
March through the chart left to right. At each step, apply one of three operators:
Predictor: create new states representing top-down expectations.
Scanner: match word predictions (rules with a word after the dot) against the input words.
Completer: when a state is complete, see which rules were looking for that completed constituent.

31 Predictor
Given a state with a non-terminal to the right of the dot that is not a part-of-speech category, create a new state for each expansion of that non-terminal. Place these new states into the same chart entry as the generating state, beginning and ending where the generating state ends. So the predictor looking at
S -> · VP [0,0]
results in
VP -> · Verb [0,0]
VP -> · Verb NP [0,0]

32 Scanner
Given a state with a non-terminal to the right of the dot that is a part-of-speech category, if the next word in the input matches this part of speech, create a new state with the dot moved over the non-terminal and add it to the chart entry following the current one. So the scanner looking at
VP -> · Verb NP [0,0]
adds the new state
VP -> Verb · NP [0,1]
if the next word, "book", can be a verb. Note: the Earley algorithm uses top-down information to disambiguate parts of speech; only a POS predicted by some state can get added to the chart.

33 Completer
Applied to a state when its dot has reached the right end of the rule: the parser has discovered a category over some span of the input. Find and advance all previous states that were looking for this category (copy the state, move the dot, insert it in the current chart entry). Given
NP -> Det Nominal · [1,3]
VP -> Verb · NP [0,1]
add
VP -> Verb NP · [0,3]

34 Earley: how do we know we are done?
Find a complete S state in the final chart entry that spans the whole input, from 0 to N:
S -> α · [0,N]
If such a state exists, you are done.

35 Earley
So sweep through the chart entries from 0 to N:
New predicted states are created by starting top-down from S.
New in-progress states are created by advancing existing states as new constituents are discovered.
New complete states are created in the same way.

36 Earley
More specifically:
1. Predict all the states you can up front.
2. Read a word; extend states based on matches; add new predictions; go back to step 2 for the next word.
3. At the end, look at chart[N] to see if you have a winner.

37 Example
Book that flight.
We should find an S from 0 to 3 that is a completed state.

38 Example

39 Example

40 Earley example cont'd

41 What is it?
What kind of parser did we just describe? (Trick question.)
An Earley parser... yes, but not a parser, a recognizer. The presence of an S state with the right attributes in the right place indicates a successful recognition, but there is no parse tree, so it is not yet a parser. That is how we solve (well, don't solve) an exponential problem in polynomial time.

42 Converting Earley from Recognizer to Parser
With the addition of a few pointers we have a parser: augment the Completer to point back to the states we came from.

43 Augmenting the chart with structural information

44 Retrieving Parse Trees from the Chart
All the possible parses for an input are in the table; we just need to read off the backpointers from every complete S in the last column of the table. Find all the states S -> α · [0,N] and follow the structural traces left by the Completer. Of course, this will not be polynomial time, since there can be an exponential number of trees; but at least we can represent the ambiguity efficiently.

45 Earley and Left-Recursion
Earley solves the left-recursion problem without having to alter the grammar or artificially limit the search:
Never place a state into the chart that is already there.
Copy states before advancing them.

46 Earley and Left-Recursion: 1
S -> NP VP
NP -> NP PP
The Predictor, given the first rule S -> · NP VP [0,0], predicts NP -> · NP PP [0,0] and stops there, since predicting the same state again would be redundant.

47 Earley and Left-Recursion: 2
When a state gets advanced, make a copy and leave the original alone. Say we have NP -> · NP PP [0,0] and we find an NP from 0 to 2: we create NP -> NP · PP [0,2] but leave the original state as it is.

48 Dynamic Programming Approaches
Earley: top-down, no filtering, no restriction on grammar form.
CYK: bottom-up, no filtering, grammar restricted to Chomsky Normal Form (CNF).
The details matter less than the dimensions along which such parsers vary: bottom-up vs. top-down, with or without filtering, with or without restrictions on grammar form.

49 How to Do Parse Disambiguation
Probabilistic methods: augment the grammar with probabilities, then modify the parser to keep only the most probable parses, and at the end return the most probable parse.

50 Probabilistic CFGs
The probabilistic model: assigning probabilities to parse trees.
Getting the probabilities for the model.
Parsing with probabilities: a slight modification to the dynamic programming approach; the task is to find the maximum-probability tree for an input.

51 Probability Model
Attach probabilities to grammar rules; the expansions for a given non-terminal sum to 1:
VP -> Verb         .55
VP -> Verb NP      .40
VP -> Verb NP NP   .05
Read this as P(specific rule | LHS).

52 Probability Model (1)
A derivation (tree) consists of the set of grammar rules that are used in the tree. The probability of a tree is just the product of the probabilities of the rules in the derivation.

53 Probability Model (1.1)
The probability of a word sequence (sentence) is the probability of its tree in the unambiguous case, and the sum of the probabilities of its trees in the ambiguous case.

54 Getting the Probabilities
From an annotated database (a treebank). For example, to get the probability for a particular VP rule, count all the times the rule is used and divide by the total number of VPs.
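For instance, a minimal sketch of this relative-frequency estimation (the nested (label, children) tuple encoding of trees is my own assumption, not a standard treebank format):

from collections import Counter

def count_rules(tree, counts):
    """Count internal rule uses in a tree given as (label, [child trees or a word])."""
    label, children = tree
    if children and isinstance(children[0], tuple):          # internal node
        counts[(label, tuple(c[0] for c in children))] += 1
        for child in children:
            count_rules(child, counts)
    return counts

def estimate_rule_probs(treebank):
    counts = Counter()
    for tree in treebank:
        count_rules(tree, counts)
    totals = Counter()
    for (lhs, _), c in counts.items():
        totals[lhs] += c
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

# One tiny tree for "book that flight"
tree = ("S", [("VP", [("Verb", ["book"]),
                      ("NP", [("Det", ["that"]), ("NOM", [("Noun", ["flight"])])])])])
print(estimate_rule_probs([tree]))   # every rule appears once, so each gets probability 1.0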

55 Assumptions
We are assuming that there is a grammar to parse with, that a large, robust dictionary with parts of speech exists, and that we have the ability to parse (i.e., a parser). Given all that, we can parse probabilistically.

56 Typical Approach
Bottom-up (CYK) dynamic programming approach: assign probabilities to constituents as they are completed and placed in the table, and use the maximum probability for each constituent going up.
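A compact sketch of that idea (assuming the grammar is already in Chomsky Normal Form; the dictionary-based rule representation, the function name, and the toy probabilities are my own, for illustration only):

from collections import defaultdict

def prob_cky(words, lexical, binary):
    """Probabilistic CKY. lexical: {(A, word): prob} for rules A -> word;
       binary: {(A, B, C): prob} for rules A -> B C. Returns a table of
       best probabilities: best[(i, j, A)] = max prob of an A over words[i:j]."""
    n = len(words)
    best = defaultdict(float)
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w and p > best[(i, i + 1, A)]:
                best[(i, i + 1, A)] = p
    for span in range(2, n + 1):                     # fill longer spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                # split point
                for (A, B, C), p in binary.items():
                    cand = p * best[(i, k, B)] * best[(k, j, C)]
                    if cand > best[(i, j, A)]:
                        best[(i, j, A)] = cand
    return best

# Toy usage: "book that flight" with a tiny CNF grammar
lexical = {("Verb", "book"): 0.3, ("Det", "that"): 0.6, ("Noun", "flight"): 0.2}
binary = {("S", "Verb", "NP"): 0.5, ("NP", "Det", "Noun"): 0.4}
table = prob_cky("book that flight".split(), lexical, binary)
print(table[(0, 3, "S")])    # 0.5 * 0.3 * (0.4 * 0.6 * 0.2) = 0.0072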

57 Parsing with the Earley Algorithm
New predicted states are based on existing table entries (predicted or in-progress) that predict a certain constituent at that spot. New in-progress states are created by updating older states to reflect the fact that a previously expected constituent has been completed and located. New complete states are created when the dot in an in-progress state moves to the end.

58 More Specifically
1. Predict all the states you can.
2. Read an input word; see which predictions you can match; extend matched states and add new predictions; repeat step 2 for the next word.
3. At the end, see if chart[N] contains a complete S.

59 A Simple English Grammar (Ex.)
S → NP VP
S → Aux NP VP
S → VP
NP → Det NOM
NP → ProperNoun
NOM → Noun
NOM → Noun NOM
VP → Verb
VP → Verb NP

Det → that | this | a | the
Noun → flight | meal | money
Verb → book | include | prefer
Aux → does
ProperNoun → Houston | TWA

60 Example: Chart[0]
γ → · S [0,0] Dummy start state
S → · NP VP [0,0] Predictor
NP → · Det NOM [0,0] Predictor
NP → · ProperNoun [0,0] Predictor
S → · Aux NP VP [0,0] Predictor
S → · VP [0,0] Predictor
VP → · Verb [0,0] Predictor
VP → · Verb NP [0,0] Predictor

61 Example: Chart[1]
Verb → book · [0,1] Scanner
VP → Verb · [0,1] Completer
S → VP · [0,1] Completer
VP → Verb · NP [0,1] Completer
NP → · Det NOM [1,1] Predictor
NP → · ProperNoun [1,1] Predictor

62 Example: Chart[2]
Det → that · [1,2] Scanner
NP → Det · NOM [1,2] Completer
NOM → · Noun [2,2] Predictor
NOM → · Noun NOM [2,2] Predictor

63 Example: Chart[3]
Noun → flight · [2,3] Scanner
NOM → Noun · [2,3] Completer
NOM → Noun · NOM [2,3] Completer
NP → Det NOM · [1,3] Completer
VP → Verb NP · [0,3] Completer
S → VP · [0,3] Completer
NOM → · Noun [3,3] Predictor
NOM → · Noun NOM [3,3] Predictor

64 Earley Algorithm
The Earley algorithm has three main functions that do all the work:
Predictor: adds predictions to the chart. It is activated when the dot (in a state) is in front of a non-terminal that is not a part of speech.
Completer: moves the dot to the right when new constituents are found. It is activated when the dot is at the end of a state.
Scanner: reads the input words and enters states representing those words into the chart. It is activated when the dot is in front of a non-terminal that is a part of speech.
The Earley algorithm uses these functions to maintain the chart.

65 Predictor
procedure PREDICTOR((A → α · B β, [i,j]))
  for each (B → γ) in GRAMMAR-RULES-FOR(B, grammar) do
    ENQUEUE((B → · γ, [j,j]), chart[j])
  end

66 Completer
procedure COMPLETER((B → γ ·, [j,k]))
  for each (A → α · B β, [i,j]) in chart[j] do
    ENQUEUE((A → α B · β, [i,k]), chart[k])
  end

67 Scanner
procedure SCANNER((A → α · B β, [i,j]))
  if B ∈ PARTS-OF-SPEECH(word[j]) then
    ENQUEUE((B → word[j] ·, [j,j+1]), chart[j+1])
  end

68 Enqueue
procedure ENQUEUE(state, chart-entry)
  if state is not already in chart-entry then
    Add state at the end of chart-entry
  end

69 Earley Code
function EARLEY-PARSE(words, grammar) returns chart
  ENQUEUE((γ → · S, [0,0]), chart[0])
  for i from 0 to LENGTH(words) do
    for each state in chart[i] do
      if INCOMPLETE?(state) and NEXT-CAT(state) is not a part of speech then
        PREDICTOR(state)
      elseif INCOMPLETE?(state) and NEXT-CAT(state) is a part of speech then
        SCANNER(state)
      else
        COMPLETER(state)
    end
  end
  return chart
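A runnable Python sketch of this recognizer follows. It mirrors the pseudocode above rather than being the textbook code verbatim; the data representation (rules as (LHS, RHS) tuples, a word-to-POS dictionary) and all names are my own assumptions.

def earley_recognize(words, rules, pos_tags, start="S"):
    """Earley recognizer. rules: list of (LHS, RHS-tuple) for non-lexical rules;
       pos_tags: dict mapping each word to its set of part-of-speech categories."""
    pos_cats = {cat for tags in pos_tags.values() for cat in tags}
    n = len(words)
    chart = [[] for _ in range(n + 1)]

    def enqueue(state, k):
        if state not in chart[k]:
            chart[k].append(state)

    enqueue(("GAMMA", (start,), 0, 0, 0), 0)           # dummy start state GAMMA -> . S [0,0]
    for i in range(n + 1):
        for state in chart[i]:                          # chart[i] may grow while we scan it
            lhs, rhs, dot, origin, _ = state
            if dot < len(rhs) and rhs[dot] not in pos_cats:              # PREDICTOR
                for l, r in rules:
                    if l == rhs[dot]:
                        enqueue((l, r, 0, i, i), i)
            elif dot < len(rhs):                                          # SCANNER
                if i < n and rhs[dot] in pos_tags.get(words[i], set()):
                    enqueue((rhs[dot], (words[i],), 1, i, i + 1), i + 1)
            else:                                                         # COMPLETER
                for l, r, d, o, _ in chart[origin]:
                    if d < len(r) and r[d] == lhs:
                        enqueue((l, r, d + 1, o, i), i)
    return any(l == start and d == len(r) and o == 0
               for l, r, d, o, _ in chart[n])

# Usage with the toy grammar from the earlier slides:
rules = [
    ("S", ("NP", "VP")), ("S", ("Aux", "NP", "VP")), ("S", ("VP",)),
    ("NP", ("Det", "NOM")), ("NP", ("ProperNoun",)),
    ("NOM", ("Noun",)), ("NOM", ("Noun", "NOM")),
    ("VP", ("Verb",)), ("VP", ("Verb", "NP")),
]
pos_tags = {"book": {"Verb", "Noun"}, "that": {"Det"}, "flight": {"Noun"},
            "does": {"Aux"}, "Houston": {"ProperNoun"}}
print(earley_recognize("book that flight".split(), rules, pos_tags))   # True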

70 Retrieving Parse Trees from a Chart
To retrieve parse trees from a chart, the representation of each state must be augmented with an additional field that stores information about the completed states that generated its constituents. To collect parse trees, we update COMPLETER so that it adds a pointer to the older state onto the list of previous states of the new state. The parse tree can then be created by following these lists of previous states, starting from the completed S state.

71 Chart[0] - with Parse Tree Info
S0  γ → · S [0,0] [] Dummy start state
S1  S → · NP VP [0,0] [] Predictor
S2  NP → · Det NOM [0,0] [] Predictor
S3  NP → · ProperNoun [0,0] [] Predictor
S4  S → · Aux NP VP [0,0] [] Predictor
S5  S → · VP [0,0] [] Predictor
S6  VP → · Verb [0,0] [] Predictor
S7  VP → · Verb NP [0,0] [] Predictor

72 Chart[1] - with Parse Tree Info
S8   Verb → book · [0,1] [] Scanner
S9   VP → Verb · [0,1] [S8] Completer
S10  S → VP · [0,1] [S9] Completer
S11  VP → Verb · NP [0,1] [S8] Completer
S12  NP → · Det NOM [1,1] [] Predictor
S13  NP → · ProperNoun [1,1] [] Predictor

73 Chart[2] - with Parse Tree Info
S14  Det → that · [1,2] [] Scanner
S15  NP → Det · NOM [1,2] [S14] Completer
S16  NOM → · Noun [2,2] [] Predictor
S17  NOM → · Noun NOM [2,2] [] Predictor

74 Chart[3] - with Parse Tree Info
S18  Noun → flight · [2,3] [] Scanner
S19  NOM → Noun · [2,3] [S18] Completer
S20  NOM → Noun · NOM [2,3] [S18] Completer
S21  NP → Det NOM · [1,3] [S14,S19] Completer
S22  VP → Verb NP · [0,3] [S8,S21] Completer
S23  S → VP · [0,3] [S22] Completer
S24  NOM → · Noun [3,3] [] Predictor
S25  NOM → · Noun NOM [3,3] [] Predictor

75 Global Ambiguity
Grammar: S → Verb, S → Noun. Input: "book".
Chart[0]
S0  γ → · S [0,0] [] Dummy start state
S1  S → · Verb [0,0] [] Predictor
S2  S → · Noun [0,0] [] Predictor
Chart[1]
S3  Verb → book · [0,1] [] Scanner
S4  Noun → book · [0,1] [] Scanner
S5  S → Verb · [0,1] [S3] Completer
S6  S → Noun · [0,1] [S4] Completer

76 Statistical Parse Disambiguation
Problem: how do we disambiguate among a set of parses of a given sentence? We want to pick the parse tree that corresponds to the correct meaning.
Possible solutions:
Pass the problem on to semantic processing.
Use principle-based disambiguation methods.
Use a probabilistic model to assign likelihoods to the alternative parse trees and select the best one (or at least rank them). Associating probabilities with the grammar rules gives us such a model.

77 Probabilistic CFGs
Associate a probability with each grammar rule; the probability reflects the relative likelihood of using that rule to expand the LHS constituent. Assume that for a constituent C we have k grammar rules of the form C → αi. We are interested in calculating P(C → αi | C), the probability of using rule i to derive C. Such probabilities can be estimated from a corpus of parse trees:
P(C → αi | C) ≈ Count(C → αi) / Count(C)

78 Probabilistic CFGs (cont.)
Attach probabilities to grammar rules; the expansions for a given non-terminal sum to 1:
VP -> Verb         .55
VP -> Verb NP      .40
VP -> Verb NP NP   .05

79 Assigning Probabilities to Parse Trees
Assume that the probability of a constituent is independent of the context in which it appears in the parse tree. The probability of a constituent C′ that was constructed from A1′, …, An′ using the rule C → A1 … An is:
P(C′) = P(C → A1 … An | C) · P(A1′) · … · P(An′)
At the leaves of the tree, we use the POS probabilities P(C | wi).

80 Assigning Probabilities to Parse Trees (cont.)
A derivation (tree) consists of the set of grammar rules that are used in the tree. The probability of a derivation (tree) is just the product of the probabilities of those rules.

81 Assigning Probabilities to Parse Trees (Ex. Grammar)
S -> NP VP      0.6
S -> VP         0.4
NP -> Noun      1.0
VP -> Verb      0.3
VP -> Verb NP   0.7
Noun -> book    0.1
Verb -> book    0.2

82 Parse Trees for an Input: book book
First parse: [S [NP [Noun book]] [VP [Verb book]]]
P([Noun book]) = P(Noun->book) = 0.1
P([Verb book]) = P(Verb->book) = 0.2
P([NP [Noun book]]) = P(NP->Noun) * P([Noun book]) = 1.0 * 0.1 = 0.1
P([VP [Verb book]]) = P(VP->Verb) * P([Verb book]) = 0.3 * 0.2 = 0.06
P([S [NP [Noun book]] [VP [Verb book]]]) = P(S->NP VP) * 0.1 * 0.06 = 0.6 * 0.1 * 0.06 = 0.0036
Second parse: [S [VP [Verb book] [NP [Noun book]]]]
P([VP [Verb book] [NP [Noun book]]]) = P(VP->Verb NP) * 0.2 * 0.1 = 0.7 * 0.2 * 0.1 = 0.014
P([S [VP [Verb book] [NP [Noun book]]]]) = P(S->VP) * 0.014 = 0.4 * 0.014 = 0.0056
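The same arithmetic as a tiny sketch (the dictionary keys are just string names for the rules above):

rule_prob = {"S->NP VP": 0.6, "S->VP": 0.4, "NP->Noun": 1.0,
             "VP->Verb": 0.3, "VP->Verb NP": 0.7,
             "Noun->book": 0.1, "Verb->book": 0.2}

tree1 = ["S->NP VP", "NP->Noun", "Noun->book", "VP->Verb", "Verb->book"]
tree2 = ["S->VP", "VP->Verb NP", "Verb->book", "NP->Noun", "Noun->book"]

def tree_probability(rules_used):
    p = 1.0
    for r in rules_used:
        p *= rule_prob[r]
    return p

print(tree_probability(tree1), tree_probability(tree2))   # about 0.0036 and 0.0056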

83 Problems with Probabilistic CFG Models
The main problem with the probabilistic CFG model is that it does not take contextual effects into account. Example: pronouns are much more likely to appear in the subject position of a sentence than in object position, but in a PCFG the rule NP → Pronoun has only one probability. One simple extension is to make the probabilities depend on the first word of the constituent: instead of P(C → αi | C), use P(C → αi | C, w) where w is the first word in C. Example: the rule VP → V NP PP is used 93% of the time with the verb put, but only 10% of the time with like. This requires estimating a much larger set of probabilities, but can significantly improve disambiguation performance.

84 Probabilistic Lexicalized CFGs
A solution to some of the problems with probabilistic CFGs is to use probabilistic lexicalized CFGs: use the probabilities of particular words when computing the probabilities in the derivation.

85 Example

86 How to find the probabilities?
We used to have VP -> V NP PP with P(r | VP): the count of this rule divided by the number of VPs in a treebank. Now we have VP(dumped) -> V(dumped) NP(sacks) PP(in), i.e. P(r | VP ∧ dumped is the verb ∧ sacks is the head of the NP ∧ in is the head of the PP). Such events are not likely to have significant counts in any treebank.

87 Subcategorization When stuck, exploit independence and collect the statistics you can… We’ll focus on capturing two things Verb subcategorization Particular verbs have affinities for particular VPs Objects affinities for their predicates (mostly their mothers and grandmothers) Some objects fit better with some predicates than others Condition particular VP rules on their head… so r: VP -> V NP PP P(r|VP) Becomes P(r | VP ^ dumped) What’s the count? How many times was this rule used with dump, divided by the number of VPs that dump appears in total 11/16/2018

88 Atomic Subcat Symbols
VP → Verb
  <VP HEAD> = <Verb HEAD>
  <VP HEAD SUBCAT> = INTRANS
VP → Verb NP
  <VP HEAD SUBCAT> = TRANS
VP → Verb NP NP
  <VP HEAD SUBCAT> = DITRANS
Verb → slept
  <Verb HEAD SUBCAT> = INTRANS
Verb → served
  <Verb HEAD SUBCAT> = TRANS
Verb → gave
  <Verb HEAD SUBCAT> = DITRANS

89 Encoding Subcat Lists as Features
Verb → gave
  <Verb HEAD SUBCAT FIRST CAT> = NP
  <Verb HEAD SUBCAT SECOND CAT> = NP
  <Verb HEAD SUBCAT THIRD> = END
VP → Verb NP NP
  <VP HEAD> = <Verb HEAD>
  <VP HEAD SUBCAT FIRST CAT> = <NP CAT>
  <VP HEAD SUBCAT SECOND CAT> = <NP CAT>
  <VP HEAD SUBCAT THIRD> = END
Here we are only encoding lists using positional features.

90 Minimal Rule Approach
In fact, we do not need symbols like SECOND and THIRD; they are just used to encode lists. We can encode lists directly (as in LISP):
<SUBCAT FIRST CAT> = NP
<SUBCAT REST FIRST CAT> = NP
<SUBCAT REST REST> = END

91 Subcategorization Frames for Lexical Entries
We can use two different notations to represent subcategorization frames for lexical entries (verbs). For example:
Verb → want
  <Verb HEAD SUBCAT FIRST CAT> = NP
or
  <Verb HEAD SUBCAT FIRST CAT> = VP
  <Verb HEAD SUBCAT FIRST FORM> = INFINITIVE

92 Implementing Unification
The representation we have used so far cannot facilitate the destructive merger aspect of the unification algorithm. For this reason, we add additional fields (additional edges in the DAGs) to our feature structures. Each feature structure consists of two fields:
Content field: this field can be NULL or may contain an ordinary feature structure.
Pointer field: this field can be NULL or may contain a pointer to another feature structure.
If the pointer field of a DAG is NULL, the content field contains the actual feature structure to be processed. If the pointer field is not NULL, the destination of that pointer is the actual feature structure to be processed.

93 Extended Feature Structures

94 Extended DAG
(figure: an extended DAG whose content field C holds the features [NUM SG, PER 3] and whose pointer field P is NULL)

95 Unification of Extended DAGs
(figure: two extended DAGs to be unified, one with content [NUM SG] and one with content [PER 3], both with NULL pointers)

96 Unification of Extended DAGs (cont.)
(figure: the result of unification, a DAG whose content is [NUM SG, PER 3])

97 Unification Algorithm
function UNIFY(f1, f2) returns fstructure or failure
  f1real ← real contents of f1   /* dereference f1 */
  f2real ← real contents of f2   /* dereference f2 */
  if f1real is Null then { f1.pointer ← f2; return f2 }
  else if f2real is Null then { f2.pointer ← f1; return f1 }
  else if f1real and f2real are identical then { f1.pointer ← f2; return f2 }
  else if f1real and f2real are complex feature structures then {
    f2.pointer ← f1
    for each feature in f2real do {
      otherfeature ← find or create a feature corresponding to feature in f1real
      if UNIFY(feature.value, otherfeature.value) returns failure then return failure
    }
    return f1
  }
  else return failure
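A Python sketch of this destructive unification over pointer-augmented structures (the FS class and the dict-based content representation are my own rendering of the slide's content/pointer fields, not the textbook code):

class FS:
    """Feature structure node with a content field and a pointer field."""
    def __init__(self, content=None):
        self.content = content      # None, an atomic value, or a dict of feature -> FS
        self.pointer = None         # forwarding pointer set during unification

def deref(f):
    # Follow the pointer chain to the node that really carries the content.
    while f.pointer is not None:
        f = f.pointer
    return f

def unify(f1, f2):
    f1, f2 = deref(f1), deref(f2)
    if f1.content is None:                        # f1 is empty: forward it to f2
        f1.pointer = f2
        return f2
    if f2.content is None:                        # f2 is empty: forward it to f1
        f2.pointer = f1
        return f1
    if f1.content == f2.content and not isinstance(f1.content, dict):
        f1.pointer = f2                           # identical atomic values
        return f2
    if isinstance(f1.content, dict) and isinstance(f2.content, dict):
        f2.pointer = f1                           # merge f2's features into f1
        for feat, val in f2.content.items():
            other = f1.content.setdefault(feat, FS())   # find or create in f1
            if unify(val, other) is None:
                return None                       # failure propagates up
        return f1
    return None                                   # clash: failure

# [NUM SG] unified with [PER 3] yields [NUM SG, PER 3], as in the figures above
a = FS({"NUM": FS("SG")})
b = FS({"PER": FS("3")})
result = unify(a, b)
print({k: deref(v).content for k, v in deref(result).content.items()})  # {'NUM': 'SG', 'PER': '3'}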

98 Example - Unification of Complex Structures

99 Example - Unification of Complex Structures (cont.)
(figure: the unified complex feature structure, combining the agreement values NUM SG and PER 3)

100 Parsing with Unification Constraints
Let us assume that we have augmented our grammar with sets of unification constraints. What changes do we need to make to a parser to make use of them?
Building feature structures and associating them with subtrees.
Unifying feature structures when subtrees are created.
Blocking ill-formed constituents.

101 Earley Parsing with Unification Constraints
What do we have to do to integrate unification constraints into the Earley parser?
Build feature structures (represented as DAGs) and associate them with states in the chart.
Unify feature structures as states are advanced in the chart.
Block ill-formed states from entering the chart.
The main change is in the COMPLETER function of the Earley parser: this routine invokes the unifier to unify two feature structures.

102 Building Feature Structures
NP → Det NOMINAL
  <Det HEAD AGREEMENT> = <NOMINAL HEAD AGREEMENT>
  <NP HEAD> = <NOMINAL HEAD>
This rule with its constraints corresponds to a DAG (shown in the figure).

103 Augmenting States with DAGs
Each state has an additional field containing the DAG that represents the feature structure corresponding to the state. When a rule is first used by PREDICTOR to create a state, the DAG associated with the state simply consists of the DAG retrieved from the rule. For example:
S → · NP VP, [0,0], [], Dag1    where Dag1 is the feature structure corresponding to S → NP VP
NP → · Det NOMINAL, [0,0], [], Dag2    where Dag2 is the feature structure corresponding to NP → Det NOMINAL

104 What does COMPLETER do?
When COMPLETER advances the dot in a state, it unifies the feature structure of the newly completed state with the appropriate part of the feature structure being advanced. If this unification succeeds, the new state gets the result of the unification as its DAG and is entered into the chart; if it fails, nothing is entered into the chart.

105 A Completion Example
Parsing the phrase "that flight", after "that" has been processed:
NP → Det · NOMINAL, [0,1], [SDet], Dag1
A newly completed state:
NOMINAL → Noun ·, [1,2], [SNoun], Dag2
To advance the NP state, the parser unifies the feature structure found under the NOMINAL feature of Dag2 with the feature structure found under the NOMINAL feature of Dag1.

106 Earley Parse
function EARLEY-PARSE(words, grammar) returns chart
  ENQUEUE((γ → · S, [0,0], dag), chart[0])
  for i from 0 to LENGTH(words) do
    for each state in chart[i] do
      if INCOMPLETE?(state) and NEXT-CAT(state) is not a part of speech then
        PREDICTOR(state)
      elseif INCOMPLETE?(state) and NEXT-CAT(state) is a part of speech then
        SCANNER(state)
      else
        COMPLETER(state)
    end
  end
  return chart

107 Predictor and Scanner
procedure PREDICTOR((A → α · B β, [i,j], dagA))
  for each (B → γ) in GRAMMAR-RULES-FOR(B, grammar) do
    ENQUEUE((B → · γ, [j,j], dagB), chart[j])
  end

procedure SCANNER((A → α · B β, [i,j], dagA))
  if B ∈ PARTS-OF-SPEECH(word[j]) then
    ENQUEUE((B → word[j] ·, [j,j+1], dagB), chart[j+1])
  end

108 Completer and UnifyStates
procedure COMPLETER((B → γ ·, [j,k], dagB))
  for each (A → α · B β, [i,j], dagA) in chart[j] do
    if newdag ← UNIFY-STATES(dagB, dagA, B) does not fail then
      ENQUEUE((A → α B · β, [i,k], newdag), chart[k])
  end

procedure UNIFY-STATES(dag1, dag2, cat)
  dag1cp ← CopyDag(dag1)
  dag2cp ← CopyDag(dag2)
  UNIFY(FollowPath(cat, dag1cp), FollowPath(cat, dag2cp))

109 Enqueue
procedure ENQUEUE(state, chart-entry)
  if state is not subsumed by a state in chart-entry then
    Add state at the end of chart-entry
  end

