CS60057 Speech & Natural Language Processing, Autumn 2005. Lecture 4, 28 July 2005 (slide footer: Lecture 3, 7/27/2005, Natural Language Processing).


1 CS60057 Speech & Natural Language Processing, Autumn 2005. Lecture 4, 28 July 2005

2 Morphology
Morphology is the study of the rules that govern the combination of morphemes.
- Inflection: same word, different syntactic information
  - run/runs/running, book/books
- Derivation: new word, different meaning; often a different part of speech, but not always
  - possible/possibly/impossible, happy/happiness
- Compounding: new word, where each part is itself a word
  - blackboard, bookshelf
  - lAlakamala, banabAsa

3 Morphology Level: The Mapping
Formally, morphological analysis is a mapping A+ -> 2^(L, C1, C2, ..., Cn): each word form maps to a set of readings, where
- A is the alphabet of phonemes (A+ denotes any non-empty sequence of phonemes)
- L is the set of possible lemmas, uniquely identified
- Ci are morphological categories, such as:
  - grammatical number, gender, case
  - person, tense, negation, degree of comparison, voice, aspect, ...
  - tone, politeness, ...
  - part of speech (not quite a morphological category, but ...)
A, L and the Ci are obviously language-dependent.

4 Bengali/Hindi Inflectional Morphology
Certain languages encode in morphology information that other languages express in syntax.
Some inflectional suffixes that nouns can take:
- singular/plural
- gender
- possessive markers
- case markers (the different karakas)
Inflectional suffixes that verbs can take:
- Hindi: tense, aspect, modality, person, gender, number
- Bengali: tense, aspect, modality, person
There is an ordering among inflectional suffixes (morphotactics):
- Chhelederke
- baigulokei

5 Bengali/Hindi Derivational Morphology
Derivational morphology in Bengali and Hindi is very rich.

6 English Inflectional Morphology
Nouns have simple inflectional morphology:
- plural: cat / cats
- possessive: John / John's
Verbs have slightly more complex, but still relatively simple, inflectional morphology:
- past form: walk / walked
- past participle form: walk / walked
- gerund: walk / walking
- third person singular: walk / walks
Verbs can be categorized as:
- main verbs
- modal verbs: can, will, should
- primary verbs: be, have, do
Regular and irregular verbs: walk / walked vs. go / went
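The regular verb paradigm above can be sketched in a few lines of Python. This is a hypothetical helper, not from the lecture: it handles only fully regular verbs and ignores the spelling changes discussed later in the deck.

```python
def inflect_regular_verb(stem):
    """Return the regular inflected forms of a verb stem (no spelling rules)."""
    return {
        "past": stem + "ed",             # walk -> walked
        "past_participle": stem + "ed",  # walk -> walked
        "gerund": stem + "ing",          # walk -> walking
        "3sg": stem + "s",               # walk -> walks
    }

forms = inflect_regular_verb("walk")
# Irregular verbs (go/went) and spelling changes (make/making)
# need the extra machinery developed in the rest of the lecture.
```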

7 Regulars and Irregulars
Some words misbehave (refuse to follow the rules):
- mouse/mice, goose/geese, ox/oxen
- go/went, fly/flew
The terms regular and irregular refer to words that follow the rules and those that don't.

8 Regular and Irregular Verbs
Regulars:
- walk, walks, walking, walked, walked
Irregulars:
- eat, eats, eating, ate, eaten
- catch, catches, catching, caught, caught
- cut, cuts, cutting, cut, cut

9 Derivational Morphology
Derivation is only quasi-systematic: it can involve irregular meaning change and changes of word class.
Some English derivational affixes:
- -ation: transport / transportation
- -er: kill / killer
- -ness: fuzzy / fuzziness
- -al: computation / computational
- -able: break / breakable
- -less: help / helpless
- un-: do / undo
- re-: try / retry
Affixes can stack, as in renationalizationability.

10 Derivational Examples
Verb/Adj to Noun:
- -ation: computerize -> computerization
- -ee: appoint -> appointee
- -er: kill -> killer
- -ness: fuzzy -> fuzziness

11 Derivational Examples
Noun/Verb to Adj:
- -al: computation -> computational
- -able: embrace -> embraceable
- -less: clue -> clueless

12 Compute
Many derivational paths are possible. Start with compute:
- compute -> computer -> computerize -> computerization
- compute -> computation -> computational
- compute -> computer -> computerize -> computerizable
- compute -> computee

13 Parts of a Morphological Processor
For a morphological processor, we need at least the following:
- Lexicon: the list of stems and affixes, together with basic information about them such as their main categories (noun, verb, adjective, ...) and their sub-categories (regular noun, irregular noun, ...).
- Morphotactics: the model of morpheme ordering that explains which classes of morphemes can follow other classes of morphemes inside a word.
- Orthographic rules (spelling rules): these model the changes that occur in a word, normally when two morphemes combine.
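The three components can be sketched as plain data structures. This is illustrative Python with invented entries; a real lexicon would be far larger and the rule set far richer.

```python
import re

# Lexicon: stems with main category and sub-category
lexicon = {
    "cat":   {"cat": "noun", "subcat": "regular"},
    "fox":   {"cat": "noun", "subcat": "regular"},
    "goose": {"cat": "noun", "subcat": "irregular-singular"},
    "geese": {"cat": "noun", "subcat": "irregular-plural"},
}

# Morphotactics: which morpheme classes may follow a stem class
# (None means the word may end here)
morphotactics = {
    "regular": ["plural-s", None],   # cat, cat+s
    "irregular-singular": [None],    # goose
    "irregular-plural": [None],      # geese
}

# One orthographic rule: e-insertion when plural -s attaches after x, s, z
def e_insertion(form):
    return re.sub(r"(x|s|z)s$", r"\1es", form)
```

For example, `e_insertion("foxs")` repairs the naive concatenation fox + s to "foxes", while forms like "cats" pass through unchanged.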

14 Lexicon
A lexicon is a repository for words (stems). They are grouped according to their main categories:
- noun, verb, adjective, adverb, ...
They may also be divided into sub-categories:
- regular nouns, irregular-singular nouns, irregular-plural nouns, ...
The simplest way to build a morphological parser would be to put every possible word, together with all its inflections, into the lexicon.
- We do not do this because the number of forms is huge (for Turkish, it is theoretically infinite).

15 Morphotactics
Morphotactics specifies which morphemes can follow which morphemes.
Lexicon:

  reg-noun | irreg-pl-noun | irreg-sg-noun | plural
  fox      | geese         | goose         | -s
  cat      | sheep         | sheep         |
  dog      | mice          | mouse         |

Simple English nominal inflection (morphotactic rules), as a three-state FSA over states 0, 1, 2:
- 0 --reg-noun--> 1
- 1 --plural (-s)--> 2
- 0 --irreg-sg-noun--> 2
- 0 --irreg-pl-noun--> 2
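The morphotactic rules above can be run directly as a small finite-state acceptor over morpheme classes. A sketch, with state names q0/q1/q2 standing for the slide's states 0/1/2:

```python
# Transition table: (state, morpheme class) -> next state
TRANSITIONS = {
    ("q0", "reg-noun"): "q1",
    ("q0", "irreg-sg-noun"): "q2",
    ("q0", "irreg-pl-noun"): "q2",
    ("q1", "plural-s"): "q2",
}
ACCEPTING = {"q1", "q2"}

def accepts(morphemes):
    """Return True if the sequence of morpheme classes is well-formed."""
    state = "q0"
    for m in morphemes:
        state = TRANSITIONS.get((state, m))
        if state is None:          # no legal transition
            return False
    return state in ACCEPTING

assert accepts(["reg-noun"])                        # cat
assert accepts(["reg-noun", "plural-s"])            # cat + s
assert accepts(["irreg-pl-noun"])                   # geese
assert not accepts(["irreg-pl-noun", "plural-s"])   # *geese + s
```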

16 Combine Lexicon and Morphotactics
Expanding each morpheme class into its letters gives a letter-level FSA that spells out f-o-x, c-a-t, d-o-g, s-h-e-e-p, g-o-o-s-e / g-e-e-s-e, m-o-u-s-e / m-i-c-e, with an optional final -s on the regular nouns.
- This FSA only says yes or no; it does not give a lexical representation.
- It also accepts a wrong word (*foxs), because spelling changes are not modeled.

17 FSAs and the Lexicon
This will actually require an extension of the FSA: the finite-state transducer (FST). We will give a quick overview.
- First we capture the morphotactics: the rules governing the ordering of affixes in a language.
- Then we add in the actual words.

18 Simple Rules (figure only; not included in the transcript)

19 Adding the Words (figure only; not included in the transcript)

20 Derivational Rules (figure only; not included in the transcript)

21 Parsing/Generation vs. Recognition
Recognition is usually not quite what we need.
- Usually, if we find some string in the language, we need to find the structure in it (parsing).
- Or we have some structure and want to produce a surface form (production/generation).
Example: from "cats" to "cat +N +PL".

22 Why Care about Morphology?
Stemming in information retrieval:
- We might want to search for "aardvark" and find pages with both "aardvark" and "aardvarks".
Morphology in machine translation:
- We need to know that the Spanish words quiero and quieres are both related to querer 'want'.
Morphology in spell checking:
- We need to know that misclam and antiundoggingly are not words, despite being made up of word parts.

23 Can't Just List All Words
Turkish word: Uygarlastiramadiklarimizdanmissinizcasina
'(behaving) as if you are among those whom we could not civilize'
- uygar 'civilized' + las 'become' + tir 'cause' + ama 'not able' + dik 'past' + lar 'plural' + imiz 'p1pl' + dan 'abl' + mis 'past' + siniz '2pl' + casina 'as if'

24 Finite State Transducers
The simple story:
- Add another tape.
- Add extra symbols to the transitions.
- On one tape we read "cats"; on the other we write "cat +N +PL".

25 Transitions
- c:c means read a c on one tape and write a c on the other.
- +N:ε means read a +N symbol on one tape and write nothing on the other.
- +PL:s means read +PL and write an s.
The transitions for this example are: c:c, a:a, t:t, +N:ε, +PL:s
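A minimal sketch of this transducer for the single word cats, running in the parsing direction. The list-of-pairs representation is invented for illustration; the empty string plays the role of ε.

```python
# Each transition is (surface symbol, lexical symbol); "" stands for epsilon.
PAIRS = [("c", "c"), ("a", "a"), ("t", "t"), ("", "+N"), ("s", "+PL")]

def parse(surface):
    """Read the surface tape, write the lexical tape (None on mismatch)."""
    out, i = "", 0
    for s, l in PAIRS:
        if s:                                    # consume a surface symbol
            if i >= len(surface) or surface[i] != s:
                return None
            i += 1
        # feature symbols like +N, +PL are written space-separated
        out += (" " + l) if l.startswith("+") else l
    return out if i == len(surface) else None

assert parse("cats") == "cat +N +PL"
assert parse("dogs") is None   # this toy machine only knows "cats"
```

Because the pairs are symmetric, the same table run in the other direction generates "cats" from "cat +N +PL", which is the inversion property discussed on a later slide.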

26 Lexical to Intermediate Level (figure only; not included in the transcript)

27 FST Properties
FSTs are closed under union, inversion, and composition.
- Union: the union of two regular relations is also a regular relation.
- Inversion: the inversion of an FST simply switches the input and output labels. This means the same FST can be used for both directions of a morphological processor.
- Composition: if T1 is an FST from I1 to O1 and T2 is an FST from O1 to O2, then the composition T1 o T2 maps from I1 to O2.
We use these properties of FSTs in the creation of the FST for a morphological processor.
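Inversion is easy to see in code. In this sketch an FST is a dict from (state, input symbol) to (output symbol, next state); the representation is an assumption for illustration, not the lecture's.

```python
def invert(fst):
    """Swap the input and output labels of every arc."""
    inverted = {}
    for (state, i), (o, nxt) in fst.items():
        inverted[(state, o)] = (i, nxt)
    return inverted

# toy FST: reads "a", writes "b", looping on one state
t = {("q0", "a"): ("b", "q0")}

assert invert(t) == {("q0", "b"): ("a", "q0")}
assert invert(invert(t)) == t   # inversion is its own inverse
```

This is why one transducer suffices for both parsing (surface to lexical) and generation (lexical to surface).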

28 An FST for Simple English Nominals
The morphotactic FSA becomes an FST by labeling the arcs with symbol pairs (ε is the empty string, ^ a morpheme boundary, # a word boundary):
- reg-noun, then +N:ε, then +SG:# or +PL:^s#
- irreg-sg-noun, then +N:ε, then +SG:#
- irreg-pl-noun, then +N:ε, then +PL:#

29 FST for Stems
An FST for stems maps roots to their root class:

  reg-noun | irreg-pl-noun   | irreg-sg-noun
  fox      | g o:e o:e s e   | goose
  cat      | sheep           | sheep
  dog      | m o:i u:ε s:c e | mouse

Here fox stands for f:f o:o x:x.
When these two transducers are composed, we have an FST that maps lexical forms to intermediate forms of words for simple English noun inflection. The next step is to design the FSTs for the orthographic rules and combine all these transducers.

30 Multi-Level Multi-Tape Machines
A frequently used FST idiom, called a cascade, is to have the output of one FST read in as the input to a subsequent machine. So, to handle spelling we use three tapes:
- lexical, intermediate, and surface
We need one transducer to work between the lexical and intermediate levels, and a second (a bunch of FSTs) to work between the intermediate and surface levels to patch up the spelling.
Example (for "dogs"):
- lexical:      d o g +N +PL
- intermediate: d o g ^ s #
- surface:      d o g s

31 Lexical to Intermediate FST (figure only; not included in the transcript)

32 Orthographic Rules
We need FSTs to map the intermediate level to the surface level. For each spelling rule we will have an FST, and these FSTs run in parallel. Some English spelling rules:
- Consonant doubling: a single-letter consonant is doubled before -ing/-ed (beg/begging)
- E deletion: silent e is dropped before -ing and -ed (make/making)
- E insertion: e is added after s, z, x, ch, sh before -s (watch/watches)
- Y replacement: y changes to ie before -s, and to i before -ed (try/tries)
- K insertion: for verbs ending in vowel + c, we add k (panic/panicked)
We represent these rules using two-level morphology rules:
- a => b / c__d   (rewrite a as b when it occurs between c and d)
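Several of these rules can be sketched as ordered regex rewrites over an intermediate form that uses + as the morpheme boundary. This is illustrative and intentionally partial; consonant doubling and k-insertion, for example, are omitted.

```python
import re

# (pattern, replacement) pairs, applied in order to "stem+suffix" forms
RULES = [
    (r"e\+(ing|ed)$", r"+\1"),            # e-deletion: make+ing -> mak+ing
    (r"(x|s|z|ch|sh)\+s$", r"\1+es"),     # e-insertion: watch+s -> watch+es
    (r"([^aeiou])y\+s$", r"\1ie+s"),      # y-replacement: try+s -> trie+s
    (r"([^aeiou])y\+ed$", r"\1i+ed"),     # y-replacement: try+ed -> tri+ed
]

def apply_spelling(form):
    """Apply the spelling rules, then erase the morpheme boundary."""
    for pattern, repl in RULES:
        form = re.sub(pattern, repl, form)
    return form.replace("+", "")

assert apply_spelling("watch+s") == "watches"
assert apply_spelling("try+s") == "tries"
assert apply_spelling("make+ing") == "making"
assert apply_spelling("cat+s") == "cats"   # no rule applies
```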

33 FST for the E-Insertion Rule
E-insertion rule: ε => e / {x, s, z} ^ __ s#
(insert an e after a morpheme-final x, s, or z and before the suffix s at the end of the word)
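Read operationally on the intermediate tape (with ^ the morpheme boundary and # the word boundary, as above), the rule can be sketched as a one-line rewrite rather than a full transducer:

```python
import re

def e_insert(intermediate):
    """Insert e between a stem-final x/s/z (before ^) and a word-final s#."""
    return re.sub(r"([xsz])\^s#$", r"\1^es#", intermediate)

assert e_insert("fox^s#") == "fox^es#"   # rule fires
assert e_insert("cat^s#") == "cat^s#"    # rule does not apply
```

Erasing the ^ and # boundaries afterwards yields the surface forms "foxes" and "cats".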

34 Generating or Parsing with FST Lexicon and Rules (figure only; not included in the transcript)

35 Accepting Foxes (figure only; not included in the transcript)

36 Intersection
We can intersect all the rule FSTs to create a single FST. The intersection algorithm just takes the Cartesian product of states:
- For each state qi of the first machine and qj of the second machine, we create a new state qij.
- For input symbol a, if the first machine would transition to state qn and the second machine would transition to qm, the new machine transitions to qnm.
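The product construction described above can be sketched for deterministic acceptors. The representation is invented for illustration: an FSA is a tuple (transitions, start, accepting) with transitions keyed by (state, symbol).

```python
def intersect(fsa1, fsa2):
    """Cartesian-product construction: accepts the intersection language."""
    t1, s1, f1 = fsa1
    t2, s2, f2 = fsa2
    trans = {}
    for (x, a), v in t1.items():
        for (y, b), z in t2.items():
            if a == b:                       # same input symbol
                trans[((x, y), a)] = (v, z)  # q_ij --a--> q_nm
    return trans, (s1, s2), {(x, y) for x in f1 for y in f2}

def run(fsa, word):
    trans, state, accepting = fsa
    for a in word:
        if (state, a) not in trans:
            return False
        state = trans[(state, a)]
    return state in accepting

# A1 accepts strings over {a,b} ending in b; A2 accepts even-length strings
a1 = ({("p0", "a"): "p0", ("p0", "b"): "p1",
       ("p1", "a"): "p0", ("p1", "b"): "p1"}, "p0", {"p1"})
a2 = ({("e", "a"): "o", ("e", "b"): "o",
       ("o", "a"): "e", ("o", "b"): "e"}, "e", {"e"})

both = intersect(a1, a2)
assert run(both, "ab")       # even length and ends in b
assert not run(both, "b")    # odd length
assert not run(both, "aa")   # does not end in b
```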

37 Composition
A cascade can turn out to be somewhat of a pain:
- it is hard to manage all the tapes
- it fails to take advantage of the restricting power of the machines
So it is better to compile the cascade into a single large machine: create a new state (x,y) for every pair of states x in Q1 and y in Q2. The transition function of the composition is defined as follows:
- δ((x,y), i:o) = (v,z) if there exists a c such that δ1(x, i:c) = v and δ2(y, c:o) = z
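The composition rule above translates almost directly into code. In this sketch a transducer arc is keyed (state, (input symbol, output symbol)) and maps to the next state; the representation is an assumption for illustration.

```python
def compose(t1, t2):
    """Pair states of t1 and t2 wherever an intermediate symbol c matches."""
    composed = {}
    for (x, (i, c)), v in t1.items():
        for (y, (c2, o)), z in t2.items():
            if c == c2:   # delta1(x, i:c) = v and delta2(y, c:o) = z
                composed[((x, y), (i, o))] = (v, z)
    return composed

# toy machines: T1 rewrites a -> b, T2 rewrites b -> c
t1 = {("x0", ("a", "b")): "x0"}
t2 = {("y0", ("b", "c")): "y0"}

# the composed machine rewrites a -> c in one step, with no middle tape
assert compose(t1, t2) == {(("x0", "y0"), ("a", "c")): ("x0", "y0")}
```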

38 Intersect Rule FSTs
Running in parallel: the LEXICON-FST maps the lexical tape to the intermediate tape, and FST 1 ... FST n map the intermediate tape to the surface tape. Intersecting the rule FSTs gives a single rule transducer:
- FST R = FST 1 ^ ... ^ FST n

39 Compose Lexicon and Rule FSTs
Composing the lexicon FST with the combined rule FST yields a single transducer that maps the lexical tape directly to the surface tape:
- LEXICON-FST o FST R

40 Porter Stemming
Some applications (e.g., some information retrieval applications) do not need a whole morphological processor; they only need the stem of the word. A stemming algorithm, such as the Porter stemming algorithm, is a lexicon-free FST: just a series of cascaded rewrite rules. Stemming algorithms are efficient, but they may introduce errors because they do not use a lexicon.
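A lexicon-free suffix-stripping sketch in the spirit of Porter's algorithm. These few rules are illustrative only; the real algorithm has many more steps and measure conditions on the stem.

```python
# (suffix, replacement) pairs, tried in order; first match wins
RULES = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]

def stem(word):
    """Strip one suffix, guarding against leaving a stem shorter than 3 letters."""
    for suffix, repl in RULES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + repl
    return word

assert stem("caresses") == "caress"
assert stem("walking") == "walk"
assert stem("ponies") == "poni"
# lexicon-free errors: "news" is not the plural of "new", but the rule fires
assert stem("news") == "new"
```

The last assertion shows exactly the kind of error the slide warns about: without a lexicon, the rules cannot tell a genuine plural from a word that merely ends in s.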

