
Chapter 3: Morphology and Finite-State Transducers
Heshaam Faili, University of Tehran

2 Morphology
Morphology is the study of the internal structure of words.
Morphemes: (roughly) the minimal meaning-bearing units in a language, the smallest "building blocks" of words.
Morphological parsing is the task of breaking a word down into its component morphemes, i.e., assigning structure:
going -> go + ing
running -> run + ing (note that spelling rules are distinct from morphological rules)
Parsing can also provide us with an analysis:
going -> go:VERB + ing:GERUND

3 Kinds of morphology
Inflectional morphology: grammatical morphemes that are required for words in certain syntactic situations.
I run / John runs: -s is an inflectional morpheme marking the third person singular verb.
Derivational morphology: morphemes used to produce new words, providing new meanings and/or new parts of speech.
establish / establishment: -ment is a derivational morpheme that turns verbs into nouns.

4 More on morphology
We will refer to the stem of a word (its main part) and its affixes (additions), which include prefixes, suffixes, infixes, and circumfixes.
Most inflectional morphological endings (and some derivational ones) are productive: they apply to every word in a given class.
-ing can attach to any verb (running, hurting); re- can attach to any verb (rerun, rehurt).
Morphology is highly complex in more agglutinative languages like Persian and Turkish; some of the work done by syntax in English is done by morphology in Turkish.
This shows that we cannot simply list all possible words.

5 Overview
A. Morphological recognition with finite-state automata (FSAs)
B. Morphological parsing with finite-state transducers (FSTs)
C. Combining FSTs
D. More applications of FSTs

6 A. Morphological recognition with FSAs
Before we talk about assigning a full structure to a word, we can talk about recognizing legitimate words.
We already have the technology to do this: finite-state automata (FSAs).

7 Overview of English verbal morphology
4 regular English verb forms: base, -s, -ing, -ed
walk/walks/walking/walked
merge/merges/merging/merged
try/tries/trying/tried
map/maps/mapping/mapped
These forms are generally productive.
English irregular verbs (~250):
eat/eats/eating/ate/eaten
catch/catches/catching/caught/caught
cut/cuts/cutting/cut/cut
etc.

8 Analyzing English verbs
For the -s and -ing forms, both regular and irregular verbs use their base forms.
Irregulars differ in how they form the past and the past participle.

9 FSA for English verbal morphology (morphotactics)
initial = 0; final = {1, 2, 3}
0 -> verb-past-irreg -> 3
0 -> vstem-reg -> 1
1 -> +past -> 3
1 -> +pastpart -> 3
0 -> vstem-reg -> 2
0 -> vstem-irreg -> 2
2 -> +prog -> 3
2 -> +3sg -> 3
N.B. This covers the morphotactics, but not the spelling rules (the latter require a separate FSA).
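The FSA above can be simulated directly. Below is a minimal Python sketch; the state numbers follow the slide, while the morpheme-class labels and the idea of running over pre-segmented class tokens are illustrative assumptions. Since state 0 has two arcs labeled vstem-reg, the machine is simulated nondeterministically:

TRANS = {
    (0, "verb-past-irreg"): {3},   # e.g. ate, caught
    (0, "vstem-reg"):       {1, 2},
    (0, "vstem-irreg"):     {2},
    (1, "+past"):           {3},
    (1, "+pastpart"):       {3},
    (2, "+prog"):           {3},
    (2, "+3sg"):            {3},
}
START, FINAL = 0, {1, 2, 3}

def accepts(symbols):
    """True if a sequence of morpheme-class labels is a legal verb form."""
    states = {START}
    for sym in symbols:
        states = set().union(*(TRANS.get((q, sym), set()) for q in states))
        if not states:          # no surviving path: reject
            return False
    return bool(states & FINAL)

print(accepts(["vstem-reg", "+past"]))    # walk+ed  -> True
print(accepts(["vstem-irreg", "+past"]))  # *eat+ed  -> False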

10 A Fun FSA Exercise: Isleta Morphology
Consider the following data from Isleta, a dialect of Southern Tiwa, a Native American language spoken in New Mexico:
[temiban] 'I went'
[amiban] 'you went'
[temiwe] 'I am going'
[mimiay] 'he was going'
[tewanban] 'I came'
[tewanhi] 'I will come'

11 Practising Isleta
List the morphemes corresponding to the following English translations: 'I', 'you', 'he', 'go', 'come', +past, +present_progressive, +past_progressive, +future
What is the order of morphemes in Isleta?
How would you say each of the following in Isleta? 'He went', 'I will go', 'You were coming'

12 An FSA for Isleta Verbal Inflection
initial = 0; final = {3}
0 -> mi|te|a -> 1
1 -> mi|wan -> 2
2 -> ban|we|ay|hi -> 3

13 B. Morphological Parsing with FSTs
Using a finite-state automaton (FSA) to recognize a morphological realization of a word is useful.
But what if we also want to analyze that word? E.g., given cats, tell us that it is cat + N + PL.
A finite-state transducer (FST) gives us the necessary technology to do this.
Two-level morphology:
Lexical level: stem plus affixes
Surface level: actual spelling/realization of the word
Roughly, we'll have the following for cats: c:c a:a t:t ε:+N s:+PL

14 Finite-State Transducers
While an FSA recognizes (accepts/rejects) an input expression, it doesn't produce any other output.
An FST, on the other hand, also produces an output expression; we define this in terms of relations.
So an FSA is a recognizer, whereas an FST translates from one expression to another.
It reads from one tape and writes to another tape (see Figure 3.8, p. 71).
Actually, it can also read from the output tape and write to the input tape.
So FSTs can be used for both analysis and generation: they are bidirectional.

15 Transducers and Relations
Let's pretend we want to translate from the Cyrillic alphabet to the Roman alphabet.
We can use a mapping table, such as:
А : A
Б : B
Г : G
Д : D
etc.
We define R = {<А,A>, <Б,B>, <Г,G>, <Д,D>, ...}
We can think of this as a relation R ⊆ Cyrillic × Roman.
To understand FSTs, we need to understand relations.

16 The Cyrillic Transducer
initial = 0; final = {0}
0 -> А:A -> 0
0 -> Б:B -> 0
0 -> Г:G -> 0
0 -> Д:D -> 0
...
Transducers implement a mapping defined by a relation R = {<А,A>, <Б,B>, <Г,G>, <Д,D>, ...}
These relations are called regular relations (each side is describable by a regular expression).
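Since every arc loops on state 0, this transducer reduces to a symbol-by-symbol mapping. A minimal Python sketch; the abbreviated mapping table is from the slide (extend it to the full alphabet), and the example word is an illustrative assumption:

ARCS = {"А": "A", "Б": "B", "Г": "G", "Д": "D"}  # input:output pairs, all on state 0

def transduce(word):
    out = []
    for ch in word:
        if ch not in ARCS:                        # no arc for this symbol: reject
            raise ValueError(f"no transition for {ch!r}")
        out.append(ARCS[ch])
    return "".join(out)

print(transduce("БАГДАД"))  # -> BAGDAD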

17 FSAs and FSTs
FSTs, then, are almost identical to FSAs. Both have:
Q: a finite set of states
q0: a designated start state
F: a set of final states
δ: a transition function
The difference is the alphabet (Σ), which for an FST is comprised of complex symbols (e.g., X:Y):
FSA: Σ = a finite alphabet of symbols
FST: Σ = a finite alphabet of complex symbols, or pairs
As a shorthand, if we have X:X, we can write this as X.

18 FSTs for morphology
For morphology, using FSTs means that we can:
set up pairs between the surface level (actual realization) and the lexical level (stem/affixes): c:c a:a t:t ε:+N s:+PL
set up pairs to go from one form to another, i.e., the "underlying" base form maps to the plural: g:g o:e o:e s:s e:e
We can combine both kinds of information in the same FST:
g:g o:o o:o s:s e:e ε:+N ε:+SG
g:g o:e o:e s:s e:e ε:+N ε:+PL

19 Isleta Verbal Inflection
'I will go'
Surface: temihi
Lexical: te+PRO+1P+mi+hi+FUT
Surface tape:  te  ε     ε    mi  hi  ε
Lexical tape:  te  +PRO  +1P  mi  hi  +FUT
Note that the cells have to line up across tapes. So, if an input symbol gives rise to more/fewer output symbols, epsilons have to be added to the input/output tape in the appropriate positions.

20 An FST for Isleta Verbal Inflection
initial = 0; final = {3}
0 -> miεε:mi+PRO+3P | teεε:te+PRO+1P | aεε:a+PRO+2P -> 1
1 -> mi|wan -> 2
2 -> banε:ban+PAST | weεε:we+PRES+PROG | ayεε:ay+PAST+PROG | hiε:hi+FUT -> 3
Interpret teεε:te+PRO+1P as shorthand for three separate arcs (te:te, ε:+PRO, ε:+1P).
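Run in the analysis direction, this FST maps a surface form to its lexical form. A minimal Python sketch, organizing the three arc bundles as successive slots; the slot data is from the two slides above, while the greedy segmentation strategy is an assumption for illustration:

SLOTS = [
    {"te": "te+PRO+1P", "a": "a+PRO+2P", "mi": "mi+PRO+3P"},  # subject
    {"mi": "mi", "wan": "wan"},                               # verb stem
    {"ban": "ban+PAST", "we": "we+PRES+PROG",
     "ay": "ay+PAST+PROG", "hi": "hi+FUT"},                   # tense/aspect
]

def analyze(surface, slot=0):
    """Return a lexical-level analysis, or None if no path accepts."""
    if slot == len(SLOTS):
        return "" if surface == "" else None
    for morph, lexical in SLOTS[slot].items():
        if surface.startswith(morph):
            rest = analyze(surface[len(morph):], slot + 1)
            if rest is not None:
                return lexical + ("+" + rest if rest else "")
    return None

print(analyze("temihi"))  # -> te+PRO+1P+mi+hi+FUT  ('I will go')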

21 A Lexical Transducer (Xerox)
Remember that FSTs can be used in either direction.
l e a v e +VBZ : l e a v e s
l e a v e +VB : l e a v e
l e a v e +VBG : l e a v i n g
l e a v e +VBD : l e f t
l e a v e +NN : l e a v e
l e a v e +NNS : l e a v e s
l e a f +NNS : l e a v e s
l e f t +JJ : l e f t
Left-to-right: input leave+VBD ("upper language"), output left ("lower language")
Right-to-left: input leaves (lower language), outputs leave+NNS, leave+VBZ, leaf+NNS (upper language)

22 Transducer Example (Xerox)
L1 = [a-z]+. Consider the language L2 that results from replacing any instances of "ab" in L1 by "x".
To define the mapping, we define a relation R ⊆ L1 × L2, e.g., <abacab, xacx> is in R.
Note: "xacx" in the lower language is paired with 4 strings in the upper language: "abacab", "abacx", "xacab", and "xacx".
NB: ? = [a-z]\{a,b,x}

23 C. Combining FSTs: Spelling Rules
So far, we have gone from a lexical level (e.g., cat+N+PL) to a "surface level" (e.g., cats).
But this surface level is actually an intermediate level: it doesn't take spelling into account.
So the lexical level "fox+N+PL" corresponds to "fox^s". We will use "^" to refer to a morpheme boundary.
We need another level to account for spelling rules.

24 Lexicon FST
The lexicon FST converts the lexical level to an intermediate form:
dog+N+PL -> dog^s
fox+N+PL -> fox^s
mouse+N+PL -> mouse^s
dog+V+SG -> dog^s
It will be of the form:
0 -> f -> 1
1 -> o -> 2
2 -> x -> 3
3 -> +N:^ -> 4
4 -> +PL:s -> 5
4 -> +SG:ε -> 6
and so on.

25 English noun lexicon as an FST
(J&M Fig. 3.9; expanding the aliases yields the LEX-FST of J&M Fig. 3.11)

26 LEX-FST
Let's allow ε to pad the tape; then we won't force both tapes to have the same length.
Also, let's pretend we're generating. Lexical tape -> intermediate tape:
cat+N+PL -> cat^s#
goose+N+PL -> geese#
fox+N+PL -> fox^s#
Here ^ is the morpheme boundary and # is the word-final boundary.
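A minimal Python sketch of what the LEX-FST computes in the generating direction; the three-noun lexicon and the ^/# markers follow the slide, and treating irregulars with a lookup table stands in for their separate arcs in the real machine:

IRREGULAR_PL = {"goose": "geese"}

def lex_fst(lexical):
    stem, *feats = lexical.split("+")          # e.g. "fox", ["N", "PL"]
    if feats == ["N", "SG"]:
        return stem + "#"
    if feats == ["N", "PL"]:
        if stem in IRREGULAR_PL:               # irregular path, e.g. goose
            return IRREGULAR_PL[stem] + "#"
        return stem + "^s#"                    # +N:^ then +PL:s
    raise ValueError("no path through the lexicon FST")

for w in ["cat+N+PL", "goose+N+PL", "fox+N+PL"]:
    print(w, "->", lex_fst(w))   # cat^s#  geese#  fox^s#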

27 Rule FST
The rule FST converts the intermediate form into the surface form:
dog^s -> dogs (covers both the N and V forms)
fox^s -> foxes
mouse^s -> mice
Assuming we include arcs for every other character, it will be of the form:
0 -> f -> 0
0 -> o -> 0
0 -> x -> 1
1 -> ^:ε -> 2
2 -> ε:e -> 3
3 -> s -> 4
This FST is too impoverished...

28 Some English Spelling Rules
Consonant doubling: beg / begging / begged
E deletion: make / making
E insertion: watch / watches
Y replacement: try / tries
K insertion: panic / panicked

29 E-insertion FST (J&M Fig. 3.14, p. 78)

30 E-insertion FST
Intermediate tape: fox^s#
Surface tape: foxes#
Traces:
Generating foxes# from fox^s#: q0 -f-> q0 -o-> q0 -x-> q1 -^:ε-> q2 -ε:e-> q3 -s-> q4 -#-> q0
Generating foxs# from fox^s#: q0 -f-> q0 -o-> q0 -x-> q1 -^:ε-> q2 -s-> q5 -#-> FAIL
Generating salt# from salt#: q0 -s-> q1 -a-> q0 -l-> q0 -t-> q0 -#-> q0
Parsing assess#: q0 -a-> q0 -s-> q1 -s-> q1 -^:ε-> q2 -ε:e-> q3 -s-> q4 -s-> FAIL
and: q0 -a-> q0 -s-> q1 -s-> q1 -e-> q0 -s-> q1 -s-> q1 -#-> q0
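A minimal Python sketch of what this machine computes in the generating direction; the rule is stated directly as a regular-expression rewrite rather than as the six-state FST of Fig. 3.14, which compiles the same rule:

import re

def e_insertion(intermediate):
    # the ε:e arc: insert e when ^ follows x, s, or z and precedes s
    surface = re.sub(r"(?<=[xsz])\^(?=s)", "^e", intermediate)
    # the ^:ε and #:ε arcs: erase the boundary symbols
    return surface.replace("^", "").replace("#", "")

print(e_insertion("fox^s#"))  # -> foxes
print(e_insertion("cat^s#"))  # -> cats
print(e_insertion("salt#"))   # -> salt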

31 Combining Lexicon and Rule FSTs
We would like to combine these two FSTs, so that we can go from the lexical level to the surface level. How do we integrate the intermediate level?
Cascade the FSTs: run one after the other.
Compose the FSTs: combine the rules at each state.

32 Cascading FSTs
The idea of cascading FSTs is simple:
Input1 -> FST1 -> Output1
Output1 -> FST2 -> Output2
The output of the first FST is run as the input of the second.
Since both FSTs are reversible, the cascaded FSTs are still reversible/bidirectional.

33 Composing FSTs
We can compose each transition in one FST with a transition in another.
FST1: p0 -> a:b -> p1; p0 -> d:e -> p1
FST2: q0 -> b:c -> q1; q0 -> e:f -> q0
Composed FST:
(p0,q0) -> a:c -> (p1,q1)
(p0,q0) -> d:f -> (p1,q0)
The new state names (e.g., (p0,q0)) seem somewhat arbitrary, but this ensures that two FSTs with different structures can still be composed.
E.g., a:b and d:e originally went to the same state, but now we have to distinguish those states.
Why doesn't e:f loop anymore?
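A minimal Python sketch of this construction, with arcs as (source, input, output, destination) tuples; epsilon handling and final states are omitted for brevity:

def compose(fst1, fst2):
    composed = []
    for (p, a, b, p2) in fst1:
        for (q, b2, c, q2) in fst2:
            if b == b2:               # FST1's output feeds FST2's input
                composed.append(((p, q), a, c, (p2, q2)))
    return composed

fst1 = [("p0", "a", "b", "p1"), ("p0", "d", "e", "p1")]
fst2 = [("q0", "b", "c", "q1"), ("q0", "e", "f", "q0")]
for arc in compose(fst1, fst2):
    print(arc)
# (('p0', 'q0'), 'a', 'c', ('p1', 'q1'))
# (('p0', 'q0'), 'd', 'f', ('p1', 'q0'))

Note the second arc's destination ('p1','q0') differs from its source ('p0','q0'): even though e:f looped in FST2, the FST1 component advances, which answers the question above.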

34 Composing FSTs for morphology
With our lexical, intermediate, and surface levels, this means that we'll compose:
p2 -> x -> p3
p3 -> +N:^ -> p4
p4 -> +PL:s -> p5
p4 -> ε:ε -> p4
with:
q0 -> x -> q1
q1 -> ^:ε -> q2
q2 -> ε:e -> q3
q3 -> s -> q4
into:
(p2,q0) -> x -> (p3,q1)
(p3,q1) -> +N:ε -> (p4,q2)
(p4,q2) -> ε:e -> (p4,q3)
(p4,q3) -> +PL:s -> (p4,q4)

35 Generating or Parsing with FST lexicon and rules

36 Lexicon-Free FST: The Porter Stemmer
Used in IR and search engines: e.g., a search for "foxes" should also match "fox".
Lexicon-free stemming: the Porter algorithm applies ordered rewrite rules, e.g.:
ATIONAL -> ATE (relational -> relate)
ING -> ε if the stem contains a vowel (motoring -> motor)
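A minimal Python sketch of just these two rules; the full Porter algorithm has several ordered rule sets with extra conditions, and ready-made implementations exist (e.g. nltk.stem.PorterStemmer):

import re

def stem(word):
    if word.endswith("ational"):
        return word[:-len("ational")] + "ate"    # relational -> relate
    if word.endswith("ing") and re.search(r"[aeiou]", word[:-3]):
        return word[:-3]                         # motoring -> motor
    return word

for w in ["relational", "motoring", "sing"]:
    print(w, "->", stem(w))  # relate, motor, sing (no vowel before -ing)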

37 D. More applications of FSTs
Syntactic parsing using FSTs: approximates the actual structure (it won't work in general for syntax).
Noun phrase (NP) parsing using FSTs: also referred to as NP chunking, or partial parsing; often used for prepositional phrases (PPs), too.

38 Syntactic parsing using FSTs
Parsing is more than recognition; it returns a structure. For syntactic recognition alone, an FSA could be used.
How does syntax work?
S -> NP VP
NP -> (D) N
VP -> V NP
D -> the
N -> girl
N -> zebras
V -> saw
How do we go about encoding this?

39 Syntactic Parsing using FSTs
Input: The girl saw zebras
Tags: D N V N
FST1 (NPs): D N V N -> NP V NP
FST2 (VPs): NP V NP -> NP VP
FST3 (Ss): NP VP -> S
FST1: initial = 0; final = {2}
0 -> N:NP -> 2
0 -> D:ε -> 1
1 -> N:NP -> 2
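A minimal Python sketch of the cascade, implemented as successive regex rewrites over the tag string; a real chunker would also track which words each chunk spans:

import re

def cascade(tags):
    np = re.sub(r"(D )?N", "NP", tags)   # FST1: (D) N -> NP
    vp = re.sub(r"V NP", "VP", np)       # FST2: V NP  -> VP
    s  = re.sub(r"NP VP", "S", vp)       # FST3: NP VP -> S
    return np, vp, s

print(cascade("D N V N"))  # ('NP V NP', 'NP VP', 'S')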

40 Syntactic structure with FSTs
Note that the previous FSTs only output labels after a phrase has been completed. Where did the phrase start?
To fully capture the structure of a sentence, we need an FST which delineates the beginning and the end of a phrase:
0 -> Det:NP-Start -> 1
1 -> N:NP-Finish -> 2
Another FST can then group the pieces into complete phrases.

41 Why FSTs can't always be used for syntax
Syntax is infinite, but we have set up a finite number of levels (depth) for a tree with a finite number of FSTs.
We can still use FSTs, but (arguably) not as elegantly: The girl saw that zebras saw the giraffes. has a VP embedded under a VP, so we would have to run FST2 twice at different times.
Furthermore, we begin to need very complex FST abbreviations, e.g., /Det? Adj* N PP*/, which don't match linguistic structure.
Center-embedding constructions are allowed in languages like English but are mathematically impossible to capture with finite-state methods.

42 Center embedding
Example:
The man [(that) the woman saw] laughed.
The man [Harry said [the woman saw]] laughed.
An S occurs in the middle of another S.
This is a problem for FSA/FST technology: there is no way for finite-state grammars to make sure that the number of NPs matches the number of verbs.
These are a^n c b^n constructions, which are not regular.
We have to use context-free grammars, a topic we'll return to later in the course.

43 Noun Phrase (NP) parsing using FSTs
If we make the task narrower, we can have more success, e.g., only parse NPs:
The man on the floor likes the woman who is a trapeze artist
[The man]NP on [the floor]NP likes [the woman]NP who is [a trapeze artist]NP
Taking the NP chunker output as input, a PP chunker yields:
[The man]NP [on [the floor]NP]PP likes [the woman]NP who is [a trapeze artist]NP
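As a hands-on illustration, NLTK ships a regexp-based chunker in this spirit; this sketch assumes the nltk package is installed, and the NP pattern shown is one plausible rule, not the only choice:

import nltk

grammar = "NP: {<DT>?<JJ>*<NN>+}"        # optional Det, any adjectives, nouns
chunker = nltk.RegexpParser(grammar)

sentence = [("the", "DT"), ("man", "NN"), ("on", "IN"), ("the", "DT"),
            ("floor", "NN"), ("likes", "VBZ"), ("the", "DT"), ("woman", "NN")]
print(chunker.parse(sentence))
# prints a tree like:
# (S (NP the/DT man/NN) on/IN (NP the/DT floor/NN) likes/VBZ (NP the/DT woman/NN))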

44 Exercises
3.1, 3.4, 3.6 (3.3, 3.5 in the new edition, 2005)
Write a Persian morphological analyzer (covering nouns and verbs) in Perl.
See the document on morphological analysis from the Shiraz project.