Morphology: Words and their Parts (CS 4705). Slides adapted from Jurafsky, Martin, Hirschberg, and Dorr.


English Morphology
Morphology is the study of the ways that words are built up from smaller meaningful units called morphemes. We can usefully divide morphemes into two classes:
– Stems: the core meaning-bearing units
– Affixes: bits and pieces that adhere to stems to change their meanings and grammatical functions

Nouns and Verbs (English)
Nouns are simple (not really)
– Markers for plural and possessive
Verbs are only slightly more complex
– Markers appropriate to the tense of the verb

Regulars and Irregulars
OK, so it gets a little complicated by the fact that some words misbehave (refuse to follow the rules)
– Mouse/mice, goose/geese, ox/oxen
– Go/went, fly/flew
The terms regular and irregular will be used to refer to words that follow the rules and those that don't.

Regular and Irregular Nouns and Verbs
Regulars…
– Walk, walks, walking, walked, walked
– Table, tables
Irregulars…
– Eat, eats, eating, ate, eaten
– Catch, catches, catching, caught, caught
– Cut, cuts, cutting, cut, cut
– Goose, geese

Why care about morphology?
– Spelling correction: referece
– Morphology in machine translation: Spanish words quiero and quieres are both related to querer 'want'
– Hyphenation algorithms: refer-ence
– Part-of-speech analysis: google, googler
– Text-to-speech: grapheme-to-phoneme conversion, e.g. hothouse (/T/ or /D/)
– Allows us to guess at meaning: 'Twas brillig and the slithy toves…; Muggles moogled migwiches

Concatenative Morphology
Morpheme+Morpheme+Morpheme+…
Stems: often called lemma, base form, root, lexeme
– hope+ing → hoping; hop+ing → hopping
Affixes
– Prefixes: Antidisestablishmentarianism
– Suffixes: Antidisestablishmentarianism
– Infixes: hingi (borrow) → humingi (borrower) in Tagalog
– Circumfixes: sagen (say) → gesagt (said) in German

What useful information does morphology give us?
Different things in different languages
– Spanish: hablo, hablaré / English: I speak, I will speak
– English: book, books / Japanese: hon, hon
Languages differ in how they encode morphological information
– Isolating languages (e.g. Cantonese) have no affixes: each word usually has one morpheme
– Agglutinative languages (e.g. Finnish, Turkish) compose words of prefixes and suffixes added to a stem (like beads on a string), each feature realized by a single affix, e.g. Finnish

epäjärjestelmällistyttämättömyydellänsäkäänköhän
'Wonder if he can also… with his capability of not causing things to be unsystematic'
– Inflectional languages (e.g. English) merge different features into a single affix (e.g. the 's' in likes indicates both person and tense); and the same feature can be realized by different affixes
– Polysynthetic languages (e.g. Inuit languages) express much of their syntax in their morphology, incorporating a verb's arguments into the verb, e.g. Western Greenlandic:
Aliikusersuillammassuaanerartassagaluarpaalli.
aliiku-sersu-i-llammas-sua-a-nerar-ta-ssa-galuar-paal-li
entertainment-provide-SEMITRANS-one.good.at-COP-say.that-REP-FUT-sure.but-3.PL.SUBJ/3SG.OBJ-but
'However, they will say that he is a great entertainer, but…'
– So… different languages may require very different morphological analyzers

What we want
Something to automatically do the following kinds of mappings:
cats → cat +N +PL
cat → cat +N +SG
cities → city +N +PL
merging → merge +V +Present-participle
caught → catch +V +Past-participle
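The desired mappings can be sketched as a toy analyzer. The suffix rules and the irregular table below are illustrative assumptions for the slide's five examples only, not the chapter's actual FST:

```python
# Toy morphological analyzer: maps a surface form to a "stem +POS +features"
# analysis. The rules and the irregular lookup table are illustrative only.
IRREGULARS = {
    "caught": "catch +V +Past-participle",
    "geese": "goose +N +PL",
}

def analyze(word):
    w = word.lower()
    if w in IRREGULARS:                  # irregulars need a lookup table
        return IRREGULARS[w]
    if w.endswith("ies"):                # cities -> city +N +PL
        return w[:-3] + "y +N +PL"
    if w.endswith("ing"):                # merging -> merge +V +Present-participle
        return w[:-3] + "e +V +Present-participle"
    if w.endswith("s"):                  # cats -> cat +N +PL
        return w[:-1] + " +N +PL"
    return w + " +N +SG"                 # cat -> cat +N +SG
```

Even this tiny sketch shows why a lexicon is needed: without the irregular table, caught would be misanalyzed by the suffix rules.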

Morphology Can Help Define Word Classes
AKA morphological classes or parts-of-speech
Closed vs. open (function vs. content) class words
– Pronoun, preposition, conjunction, determiner, …
– Noun, verb, adverb, adjective, …
Identifying word classes is useful for almost any task in NLP, from translation to speech recognition to topic detection… very basic semantics

(English) Inflectional Morphology
Word stem + grammatical morpheme → different forms of the same word
– Usually produces a word of the same class
– Usually serves a syntactic or grammatical function (e.g. agreement): like → likes or liked; bird → birds
Nominal morphology
– Plural forms: s or es
– Irregular forms (goose/geese)

– Mass vs. count nouns (fish/fish(es), or -s?)
– Possessives (cat's, cats')
Verbal inflection
– Main verbs (sleep, like, fear) are relatively regular: -s, -ing, -ed
And productive: -ed (instant-messaged, faxed, homered)
But some are not: eat/ate/eaten, catch/caught/caught
– Primary (be, have, do) and modal verbs (can, will, must) are often irregular and not productive
Be: am/is/are/were/was/been/being
– Irregular verbs are few (~250) but frequently occurring
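Regular nominal inflection plus an irregular lookup can be sketched in a few lines (toy word lists; a real system needs a full lexicon):

```python
# Sketch of regular English pluralization with an irregular lookup.
# The irregular table and spelling conditions are a small illustrative subset.
IRREGULAR_PLURALS = {"goose": "geese", "mouse": "mice", "ox": "oxen", "fish": "fish"}

def pluralize(noun):
    if noun in IRREGULAR_PLURALS:                        # goose -> geese
        return IRREGULAR_PLURALS[noun]
    if noun.endswith(("s", "x", "z", "ch", "sh")):       # fox -> foxes
        return noun + "es"
    if noun.endswith("y") and noun[-2] not in "aeiou":   # city -> cities
        return noun[:-1] + "ies"
    return noun + "s"                                    # cat -> cats
```

The ordering matters: irregulars must be checked before the regular rules fire.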

Derivational Morphology
Word stem + syntactic/grammatical morpheme → new words
– Usually produces a word of a different class
– Incomplete process: derivational morphs cannot be applied to just any member of a class
Verbs → nouns
– -ize verbs → -ation nouns
– generalize, realize → generalization, realization
– synthesize but not *synthesization

Verbs, nouns → adjectives
– embrace, pity → embraceable, pitiable
– care, wit → careless, witless
Adjective → adverb
– happy → happily
The process is selective in unpredictable ways
– Less productive: nerveless/*evidence-less, malleable/*sleep-able, rar-ity/*rareness
– Meanings of derived terms are harder to predict by rule: clueless, careless, nerveless, sleepless

Compounding
Two base forms join to form a new word
– Bedtime, Wienerschnitzel, Rotwein
– Careful? Compound or derivation?

Morphotactics
What are the 'rules' for constructing a word in a given language?
– Pseudo-intellectual vs. *intellectual-pseudo
– Rational-ize vs. *ize-rational
– Cretin-ous vs. *cretin-ly vs. *cretin-acious

Semantics: in English, un- cannot attach to adjectives that already have a negative connotation:
– Unhappy vs. *unsad
– Unhealthy vs. *unsick
– Unclean vs. *undirty
Phonology: in English, -er cannot attach to words of more than two syllables
– great, greater
– happy, happier
– competent, *competenter
– elegant, *eleganter
– unruly, ?unrulier

Morphological Parsing These regularities enable us to create software to parse words into their component parts

Morphology and FSAs
We'd like to use the machinery provided by FSAs to capture facts about morphology, i.e.:
– Accept strings that are in the language
– And reject strings that are not
– And do it in a way that doesn't require us, in effect, to list all the words in the language

What do we need to build a morphological parser?
– Lexicon: list of stems and affixes (with corresponding parts of speech)
– Morphotactics of the language: a model of how and which morphemes can be affixed to a stem
– Orthographic rules: spelling modifications that may occur when affixation occurs, e.g. in- → il- in the context of l (in- + legal → illegal)
Most morphological phenomena can be described with regular expressions, so finite-state techniques are often used to represent morphological processes.
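The three ingredients can be shown in miniature. Everything here (the lexicon contents, the affix table, the function name) is an illustrative toy, not a real parser:

```python
# Toy illustration of the three ingredients of a morphological parser:
# a lexicon, morphotactics, and an orthographic rule.
LEXICON = {"legal": "adj", "happy": "adj"}        # stems with parts of speech
MORPHOTACTICS = {"adj": {"un-", "in-"}}           # which prefixes attach to which class

def attach(prefix, stem):
    """Check morphotactics, then apply one orthographic rule (in- -> il- before l)."""
    if prefix not in MORPHOTACTICS.get(LEXICON[stem], set()):
        raise ValueError(f"{prefix} cannot attach to {stem}")
    if prefix == "in-" and stem.startswith("l"):  # in- + legal -> illegal
        prefix = "il-"
    return prefix.rstrip("-") + stem
```

Note that the orthographic rule fires only after the morphotactic check succeeds, mirroring the division of labor described above.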

Start Simple Regular singular nouns are ok Regular plural nouns have an -s on the end Irregulars are ok as is

Simple Rules

Now Add in the Words

Derivational morphology: adjective fragment
(FSA over states q0–q5, reconstructed from the diagram)
– (un-)? + adj-root1 + (-er, -ly, -est, or ε)
– adj-root2 + (-er, -est, or ε)
Adj-root1: clear, happi, real (clearly)
Adj-root2: big, red (*bigly)
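One way to encode this fragment directly as an acceptor, reading the un-/root restrictions off the diagram. This works at the morpheme level only, so it ignores spelling changes like big + -er → bigger (those are handled later by the orthographic rules):

```python
# Acceptor for the adjective FSA fragment: (un-)? adj-root1 (-er|-ly|-est|eps)
# or adj-root2 (-er|-est|eps). Morpheme level only; no spelling rules.
ADJ_ROOT1 = {"clear", "happi", "real"}   # take -er, -ly, -est
ADJ_ROOT2 = {"big", "red"}               # take only -er, -est (*bigly)
SUFFIX1 = {"", "er", "ly", "est"}
SUFFIX2 = {"", "er", "est"}

def accepts(word):
    for prefix in ("", "un"):
        if not word.startswith(prefix):
            continue
        rest = word[len(prefix):]
        for root in ADJ_ROOT1:
            if rest.startswith(root) and rest[len(root):] in SUFFIX1:
                return True
        if prefix == "":                 # assume un- combines only with root1
            for root in ADJ_ROOT2:
                if rest.startswith(root) and rest[len(root):] in SUFFIX2:
                    return True
    return False
```

The machine correctly rejects *bigly while accepting clearly and unhappiest.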

Parsing/Generation vs. Recognition
We can now run strings through these machines to recognize strings in the language:
– Accept words that are OK
– Reject words that are not
But recognition is usually not quite what we need:
– Often, if we find some string in the language, we would like to find the structure in it (parsing)
– Or we have some structure and we want to produce a surface form (production/generation)
Example: from "cats" to "cat +N +PL"

Finite State Transducers
The simple story:
– Add another tape
– Add extra symbols to the transitions
– On one tape we read "cats"; on the other we write "cat +N +PL"

Applications The kind of parsing we’re talking about is normally called morphological analysis It can either be An important stand-alone component of an application (spelling correction, information retrieval) Or simply a link in a chain of processing

FSTs Kimmo Koskenniemi’s two-level morphology Idea: word is a relationship between lexical level (its morphemes) and surface level (its orthography)

Transitions
c:c means read a c on one tape and write a c on the other
+N:ε means read a +N symbol on one tape and write nothing on the other
+PL:s means read +PL and write an s
A path for cats: c:c a:a t:t +N:ε +PL:s
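A single accepting path through the FST can be represented as a list of lexical:surface arc labels (with ε as the empty string) and read in either direction, which is exactly the generation/analysis symmetry the later Generativity slide describes:

```python
# One FST path as (lexical, surface) arc pairs: c:c a:a t:t +N:eps +PL:s.
CAT_PATH = [("c", "c"), ("a", "a"), ("t", "t"), ("+N", ""), ("+PL", "s")]

def surface(path):
    """Generation direction: concatenate the surface-side symbols."""
    return "".join(s for _, s in path)

def lexical(path):
    """Analysis direction: concatenate the lexical-side symbols, spacing features."""
    out = ""
    for lex, _ in path:
        out += (" " + lex) if lex.startswith("+") else lex
    return out
```

Reading the same path downward yields "cats"; reading it upward yields "cat +N +PL".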

Typical Uses
Typically, we'll read from one tape using the first symbol on the machine transitions (just as in a simple FSA), and we'll write to the second tape using the other symbols on the transitions. In general, FSTs can be used for:
– Translators (Hello:Ciao)
– Parser/generators (Hello:How may I help you?)
– As well as Kimmo-style morphological parsing

Ambiguity
Recall that in non-deterministic recognition, multiple paths through a machine may lead to an accept state; it didn't matter which path was actually traversed. In FSTs the path to an accept state does matter, since different paths represent different parses, and different outputs will result.

Ambiguity
What's the right parse (segmentation) for unionizable?
– Union-ize-able
– Un-ion-ize-able
Each represents a valid path through the derivational morphology machine.

Ambiguity
There are a number of ways to deal with this problem:
– Simply take the first output found
– Find all the possible outputs (all paths) and return them all (without choosing)
– Bias the search so that only one or a few likely paths are explored

The Gory Details
Of course, it's not as easy as "cat +N +PL" → "cats". As we saw earlier, there are geese, mice, and oxen. But there are also a whole host of spelling/pronunciation changes that go along with inflectional changes:
– Cats vs. dogs
– Fox and foxes

Multi-Tape Machines To deal with this we can simply add more tapes and use the output of one tape machine as the input to the next So to handle irregular spelling changes we’ll add intermediate tapes with intermediate symbols

Generativity Nothing really privileged about the directions. We can write from one and read from the other or vice-versa. One way is generation, the other way is analysis

Multi-Level Tape Machines We use one machine to transduce between the lexical and the intermediate level, and another to handle the spelling changes to the surface tape

Lexical to Intermediate Level

Intermediate to Surface
The "add an e" rule, as in fox^s# → foxes#
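The e-insertion rule operates on the intermediate tape, where ^ marks a morpheme boundary and # marks the word edge. A minimal sketch with a single regular-expression rewrite (the triggering context is limited to x, s, z here; a fuller rule also covers ch and sh):

```python
import re

def intermediate_to_surface(s):
    """Apply e-insertion (fox^s# -> foxes#), then erase the boundary symbols."""
    s = re.sub(r"([xsz])\^(s#)", r"\1e\2", s)   # insert e between stem and -s
    return s.replace("^", "").replace("#", "")  # strip ^ and # for the surface form
```

Words the rule doesn't apply to pass through unchanged, which is exactly the behavior the Note slide below emphasizes.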

Foxes

Note A key feature of this machine is that it doesn’t do anything to inputs to which it doesn’t apply. Meaning that they are written out unchanged to the output tape.

Overall Scheme We now have one FST that has explicit information about the lexicon (actual words, their spelling, facts about word classes and regularity). Lexical level to intermediate forms We have a larger set of machines that capture orthographic/spelling rules. Intermediate forms to surface forms

Overall Scheme

Cascades This is a scheme that we’ll see again and again. Overall processing is divided up into distinct rewrite steps The output of one layer serves as the input to the next The intermediate tapes may or may not wind up being useful in their own right

Porter Stemmer
Porter Stemmer (1980)
Used for tasks in which you only care about the stem
– IR, modeling the given/new distinction, topic detection, document similarity
Lexicon-free morphological analysis
Cascades rewrite rules (e.g. misunderstanding → misunderstand → understand → …)
Easily implemented as an FST with rules, e.g.
– ATIONAL → ATE
– ING → ε
Not perfect…
– doing → doe

– policy → police
Does stemming help?
– IR: a little
– Topic detection: more
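The cascade idea can be sketched with just two of the rules above. This is a toy, not the real Porter stemmer, which has many more rules plus conditions on the "measure" of the remaining stem to block errors like doing → doe:

```python
# Tiny Porter-style rewrite cascade: first matching suffix rule fires.
# Illustrative subset only; the real algorithm has staged rules with conditions.
RULES = [
    ("ational", "ate"),   # relational -> relate
    ("ing", ""),          # misunderstanding -> misunderstand
    ("s", ""),            # cats -> cat
]

def stem(word):
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word
```

Rule order matters: "ational" must be tried before the bare "s" rule, or relational would lose only its final letter.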

Summing Up
FSTs provide a useful tool for implementing a standard model of morphological analysis, Koskenniemi's two-level morphology. But for many tasks (e.g. IR) much simpler approaches are still widely used, e.g. the rule-based Porter Stemmer.
Next time:
– Read Ch. 4
– HW1 assigned; see web page: