Building a Semantic Parser Overnight
Overnight framework
Which country has the highest CO2 emissions? Which had the highest increase since last year? What fraction is from the five countries with highest GDP?
Training data
The data problem: the standard dataset (GEO880) has only about 600 training examples. Compare: labeled photos number in the millions. Where does the existing data come from?
Not only quantity: The data can lack critical functionality
The process: domain → seed lexicon → logical forms and canonical utterances → paraphrases → semantic parser
The database: triples (e1, p, e2), where e1 and e2 are entities (e.g., article1, 2015) and p is a property (e.g., publicationDate)
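A minimal sketch of such a triple store, with illustrative entities and an index by property (the indexing scheme here is an assumption, not the paper's implementation):

```python
# Toy triple database: triples are (e1, p, e2), indexed by property p.
# Entity and property names are illustrative.
from collections import defaultdict

TRIPLES = [
    ("article1", "publicationDate", 2015),
    ("article2", "publicationDate", 2013),
    ("article1", "cites", "article2"),
]

by_property = defaultdict(set)
for e1, p, e2 in TRIPLES:
    by_property[p].add((e1, e2))

# All (entity, value) pairs for a property:
print(sorted(by_property["publicationDate"]))
# → [('article1', 2015), ('article2', 2013)]
```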
Seed lexicon: for every property, a lexical entry of the form <t → s[p]>, where t is a natural language phrase and s is a syntactic category, e.g., <“publication date” → RELNP[publicationDate]>
Seed lexicon: in addition, L contains two typical entities for each semantic type in the database, e.g., <alice → NP[alice]>
Unaries: TYPENP, ENTITYNP, and verb phrases VP (“has a private bath”). Binaries: RELNP for functional properties (e.g., “publication date”) and VP/NP for transitive verbs (“cites”, “is the president of”)
Grammar: rules of the form <α1 … αn → s[z]>, where α1 … αn are tokens or categories, s is a syntactic category, and z is the logical form being constructed
Grammar: the rule <RELNP[r] of NP[x] → NP[R(r).x]> yields z: R(publicationDate).article1 and c: “publication date of article 1”
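This rule can be sketched in code; the string-based logical forms and the two-entry lexicon are illustrative simplifications, not the paper's implementation:

```python
# Sketch: pairing a seed lexicon with the rule <RELNP[r] of NP[x] -> NP[R(r).x]>
# to produce a (logical form, canonical utterance) pair.
seed_lexicon = {
    "publication date": ("RELNP", "publicationDate"),
    "article 1": ("NP", "article1"),
}

def apply_relnp_of_np(relnp_phrase, np_phrase):
    """Apply <RELNP[r] of NP[x] -> NP[R(r).x]> to two lexicon phrases."""
    r_cat, r = seed_lexicon[relnp_phrase]
    x_cat, x = seed_lexicon[np_phrase]
    assert r_cat == "RELNP" and x_cat == "NP"
    logical_form = f"R({r}).{x}"
    canonical = f"{relnp_phrase} of {np_phrase}"
    return logical_form, canonical

z, c = apply_relnp_of_np("publication date", "article 1")
print(z)  # R(publicationDate).article1
print(c)  # publication date of article 1
```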
Crowdsourcing: workers paraphrase the canonical utterance, e.g., x: “when was article 1 published?” — the dataset is D = {(x, c, z)} for each (z, c) ∈ GEN(G ∪ L) and each paraphrase x ∈ P(c)
Training log-linear distribution pθ(z, c | x, w)
Under the hood
Lambda DCS: an entity e denotes the singleton set {e}; a property denotes a set of pairs (e1, e2)
Lambda DCS: for a binary b and a unary u, the join b.u denotes {e1 : ∃e2. e2 ∈ [u] and (e1, e2) ∈ [b]}
Lambda DCS: negation ¬u, union u1 ∪ u2, and intersection u1 ∩ u2
Lambda DCS: reverse R(b), defined by (e1, e2) ∈ [b] ⟺ (e2, e1) ∈ [R(b)]
Lambda DCS: aggregation and superlatives — count(u), sum(u), average(u, b), argmax(u, b)
Lambda DCS: λx.u denotes the set of pairs (e1, e2) such that e1 ∈ [u[x/e2]]. Example: R(λx.count(R(cites).x)) denotes the pairs (e1, e2) where e2 is the number of entities that e1 cites.
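The core operators can be made concrete with a toy evaluator over a world of triples; the entity and property names are illustrative:

```python
# Toy lambda-DCS evaluation over a world w of binary denotations.
# Covers join (b.u), reverse R(b), and count; names are illustrative.
w = {
    "cites": {("article1", "article2"), ("article1", "article3")},
    "publicationDate": {("article2", 2013), ("article3", 2015)},
}

def reverse(pairs):
    """R(b): (e1, e2) in [b] iff (e2, e1) in [R(b)]."""
    return {(e2, e1) for (e1, e2) in pairs}

def join(pairs, unary):
    """b.u = {e1 : exists e2 in u with (e1, e2) in b}."""
    return {e1 for (e1, e2) in pairs if e2 in unary}

# R(cites).article1 — the set of entities that article1 cites:
cited = join(reverse(w["cites"]), {"article1"})
# count(R(cites).article1):
n_cited = len(cited)
print(sorted(cited), n_cited)  # ['article2', 'article3'] 2
```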
Seed lexicon for the SOCIAL domain
Seed lexicon examples: “article” (TYPENP), “publication date” (RELNP), “cites” (VP/NP), “won an award” (VP)
Grammar Assumption 1 (Canonical compositionality): Using a small grammar, all logical forms expressible in natural language can be realized compositionally based on the logical form.
Grammar: functionality-driven — generate superlatives, comparatives, negation, and coordination
Grammar: from the seed — types, entities, and properties; noun phrases (NP); verb phrases (VP); complementizer phrases (CP), e.g., “that cites Building a Semantic Parser Overnight”, “that cites more than three articles”
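Canonical compositionality can be sketched as a small generator that expands seed entries through a couple of illustrative rules (a toy rule set, not the paper's grammar):

```python
# Sketch of canonical compositionality: expand seed entries into
# (logical form, canonical utterance) pairs via illustrative rules.
def generate():
    nps = [("article1", "article 1")]
    relnps = [("publicationDate", "publication date")]
    verbs = [("cites", "cites")]
    # <RELNP[r] of NP[x] -> NP[R(r).x]>
    for r, r_phrase in relnps:
        for x, x_phrase in nps:
            yield (f"R({r}).{x}", f"{r_phrase} of {x_phrase}")
    # CP from a transitive verb, then TYPENP + CP:
    # "article that cites article 1" -> article ∩ cites.article1
    for v, v_phrase in verbs:
        for x, x_phrase in nps:
            yield (f"article ∩ {v}.{x}", f"article that {v_phrase} {x_phrase}")

pairs = list(generate())
for z, c in pairs:
    print(z, "/", c)
```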
Paraphrasing:
“meeting whose attendee is alice” ⇒ “meeting with alice”
“author of article 1” ⇒ “who wrote article 1”
“player whose number of points is 15” ⇒ “player who scored 15 points”
Paraphrasing:
“article that has the largest publication date” ⇒ “newest article”
“housing unit whose housing type is apartment” ⇒ “apartment”
“university of student alice whose field of study is music” ⇒ “At which university did Alice study music?”, “Which university did Alice attend?”
Sublexical compositionality:
“parent of alice whose gender is female” ⇒ “mother of alice”
“person that is author of paper whose author is X” ⇒ “co-author of X”
“person whose birthdate is birthdate of X” ⇒ “person born on the same day as X”
“meeting whose start time is 3pm and whose end time is 5pm” ⇒ “meetings between 3pm and 5pm”
“that allows cats and that allows dogs” ⇒ “that allows pets”
“author of article that article whose author is X cites” ⇒ “who does X cite”
Crowdsourcing in numbers: each worker paraphrased 4 utterances, at 28 seconds on average per paraphrase; 38,360 responses were collected, of which 26,098 examples remained
Paraphrasing noise: 17% of the data is noisy, e.g., “player that has the least number of team” ⇒ “player with the lowest jersey number”; “restaurant whose star rating is 3 stars” ⇒ “hotel which has a 3 star rating”
Model and Learning: numbers, dates, and database entities in the input utterance are recognized first and added to the lexicon
Model and Learning: for (z, c) ∈ GEN(G ∪ Lx), pθ(z, c | x, w) ∝ exp(φ(c, z, x, w)ᵀθ)
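A minimal sketch of this log-linear model, assuming a single toy word-overlap feature in place of the paper's feature set:

```python
# Log-linear model p(z, c | x) ∝ exp(phi(x, c) · theta), normalized over
# the candidate (z, c) pairs. The single overlap feature is illustrative.
import math

def phi(x, c):
    """One toy feature: count of tokens shared by x and c."""
    return [len(set(x.split()) & set(c.split()))]

def p_theta(x, candidates, theta):
    """Softmax of the feature scores over the candidate set."""
    scores = [math.exp(sum(f * t for f, t in zip(phi(x, c), theta)))
              for _, c in candidates]
    total = sum(scores)
    return [s / total for s in scores]

candidates = [
    ("R(publicationDate).article1", "publication date of article 1"),
    ("cites.article2", "article that cites article 2"),
]
probs = p_theta("when was article 1 published", candidates, theta=[1.0])
```

With more shared tokens, the first candidate receives the higher probability.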
Floating parser: builds derivations with rules that need not be anchored to spans of the utterance (Pasupat and Liang, 2015)
Model and Learning: features measure the match between the input x and the canonical utterance c (e.g., word and phrase overlap, paraphrase pairs)
Model and Learning: objective max_θ Σ_{(x,c,z)∈D} log pθ(z, c | x, w) − λ‖θ‖₁, optimized with AdaGrad (Duchi et al., 2010)
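The optimizer can be sketched as a plain AdaGrad loop; folding the L1 subgradient directly into the update is a simplification (practical implementations often use lazy or truncated updates), and the gradients here are stubs standing in for the model's real gradient:

```python
# AdaGrad ascent on sum log p - lam * ||theta||_1, with per-coordinate
# step sizes. `grads` yields gradients of the log-likelihood term.
import math

def adagrad_l1(grads, dim, eta=0.1, lam=0.01, eps=1e-8):
    theta = [0.0] * dim
    hist = [0.0] * dim                      # accumulated squared gradients
    for g in grads:
        for i in range(dim):
            sign = (theta[i] > 0) - (theta[i] < 0)
            gi = g[i] - lam * sign          # subgradient of the L1 penalty
            hist[i] += gi * gi
            theta[i] += eta * gi / (math.sqrt(hist[i]) + eps)
    return theta

# Stub gradients for a 1-dimensional theta:
theta = adagrad_l1([[1.0], [1.0], [-0.5]], dim=1)
```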
Experimental Evaluation