Overview of the Language Technologies Institute and AVENUE Project Jaime Carbonell, Director March 2, 2002.

School of Computer Science at Carnegie Mellon University
–Computer Science Department (theory, systems)
–Robotics Institute (space, industry, medical)
–Language Technologies Institute (MT, speech, IR)
–Human-Computer Interaction Institute (ergonomics)
–Institute for Software Research International (SE)
–Center for Automated Learning & Discovery (DM)
–Entertainment Technologies (animation, graphics)

Language Technologies Institute
Founded in 1986 as the Center for Machine Translation (CMT).
Became the Language Technologies Institute in 1996, unifying the CMT and the Computational Linguistics program.
Current size: 110 FTEs
–18 Faculty
–22 Staff
–60 Graduate Students (45 PhD, 15 MLT)
–10 Visiting Scholars

Bill of Rights
Get the right information
To the right people
At the right time
On the right medium
In the right language
With the right level of detail

“The Right Information”
Find the right papers, web-pages, …
–Language modeling for IR (Lafferty, Callan)
–Translingual IR (Yang, Carbonell, Brown)
–Distributed IR (Callan)
Seek novelty (Carbonell, Yang, …)
–Avoid massive redundancy
–Detect new events in streaming data
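The redundancy-avoidance idea above can be sketched as a toy novelty filter: a streaming document is kept only if its cosine similarity to everything seen so far stays below a threshold. This is a minimal illustration, not the actual TDT systems; the bag-of-words representation and the threshold value are assumptions.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty_filter(stream, threshold=0.5):
    """Yield only documents whose similarity to every earlier
    retained document stays below the threshold (i.e., novel ones)."""
    seen = []
    for doc in stream:
        vec = Counter(doc.lower().split())
        if all(cosine(vec, old) < threshold for old in seen):
            seen.append(vec)
            yield doc

docs = ["earthquake hits chile coast",
        "earthquake hits chile coast today",
        "election results announced in peru"]
novel = list(novelty_filter(docs))
```

The near-duplicate second document is suppressed; the unrelated third one passes through.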

“…to the Right People”
Text categorization
–Multi-class classifiers by topic (Yang)
–Boosting for genre learning (Carbonell)
Filtering & routing
–Topic tracking in streaming data (Yang)
–TREC filtering/routing (Callan, Yang)

“…at the Right Time”
I.e., when the information is needed.
Anticipatory analysis
–Helpful info without being asked
Context-aware learning
–Interactivity with the user
–Utility theory (when to ask, when to give new or deeper info, when to back off)
(We have not yet taken up this challenge.)

“…on the Right Medium”
Speech recognition
–SPHINX (Reddy, Rudnicky, Rosenfeld, …)
–JANUS (Waibel, Schultz, …)
Speech synthesis
–Festival (Black, Lenzo)
Handwriting & gesture recognition
–ISL (Waibel, J. Yang)
Multimedia integration (CSD)
–Informedia (Wactlar, Hauptmann, …)

“… in the Right Language”
High-accuracy interlingual MT
–KANT (Nyberg, Mitamura)
Parallel-corpus-trainable MT
–Statistical MT (Lafferty, Vogel)
–Example-based MT (Brown, Carbonell)
–AVENUE instructible MT (Levin, Lavie, Carbonell)
Speech-to-speech MT
–JANUS/DIPLOMAT/AVENUE (Waibel, Frederking, Levin, Schultz, Vogel, Lafferty, Black, …)

“…at the Right Level of Detail”
Multidocument summarization (Carbonell, Waibel, Yang, …)
Question answering (Carbonell, Callan, Nyberg, Mitamura, Lavie, …)
–New thrust (JAVELIN project)
–Combines Q-analysis, IR, extraction, planning, user feedback, utility analysis, answer synthesis, …

We also Engage in:
–Tutoring systems (Eskenazi, Callan)
–Linguistic analysis (Levin, Mitamura, …)
–Robust parsing algorithms (Lavie, …)
–Interface & communication language design (Rosenfeld, Waibel, Rudnicky)
–Complex system design (Nyberg, Callan)
–Machine learning (Carbonell, Lafferty, Yang, Rosenfeld, Lavie, …)

How we do it at LTI
Data-driven methods
–Statistical learning
–Corpora-based
–Examples: statistical MT, example-based MT, text categorization, novelty detection, translingual IR
Knowledge-based methods
–Symbolic learning
–Linguistic analysis
–Knowledge representation
–Examples: interlingual MT, parsing & generation, discourse modeling, language tutoring

Hot Research Topics
–Automated Q/A from web/text (JAVELIN)
–Endangered-language MT (AVENUE)
–Novelty detection and tracking (TDT)
–Theoretical foundations of language modeling and knowledge discovery
(All require a multi-discipline approach.)

Educational Programs at LTI
PhD Program
–45 PhD students, across all research areas of the LTI
–Individual and joint advisorships
–“Marriage” process in mid-September to match faculty/projects with new students
–Years 1-2 => 50% research, 50% courses
–Years 3-N => 100% research (target: N=5)
–Semi-annual student evaluations

Education at LTI (II)
MLT Program (1-2 years)
–Courses are more central
–50% of time on project/research work (if funded)
–Many MLT students apply for PhD admission
CALL Masters (1 year)
–New program, joint with Modern Languages
Certificate program (1 semester)

The AVENUE Project: Machine Translation and Language Tools for Minority Languages Jaime Carbonell, Lori Levin, Alon Lavie, Tanja Schultz, Erik Peterson, Katharina Probst, Christian Monson, …

Machine Translation of Indigenous Languages
Policy makers have access to information about indigenous people:
–Epidemics, crop failures, etc.
Indigenous people can participate in
–Health care
–Education
–Government
–Internet
without giving up their languages.

History of AVENUE
Arose from a series of joint workshops of NSF and OAS.
Workshop recommendations:
–Create multinational projects using information technology to:
provide immediate benefits to governments and citizens
develop critical infrastructure for communication and collaborative research
–train researchers and engineers
–advance science and technology

Resources for MT
–People who speak the language
–Linguists who speak the language
–Computational linguists who speak the language
–Text on paper
–Text on line
–Comparable text on paper or on line
–Parallel text on paper or on line
–Annotated text (part of speech, morphology, etc.)
–Dictionaries (monolingual or bilingual) on paper or on line
–Recordings of spoken language
–Recordings of spoken language that are transcribed
–Etc.

MT for Indigenous Languages
–Minimal amount of parallel text
–Possibly competing standards for orthography/spelling
–Maybe not so many trained linguists
–Access to native informants possible
–Need to minimize development time and cost

Two Technical Approaches
Generalized EBMT
–Parallel text of 50K-2MB (uncontrolled corpus)
–Rapid implementation
–Proven for major languages with reduced data
Transfer-rule learning
–Elicitation (controlled) corpus to extract grammatical properties
–Seeded version-space learning

Types of Machine Translation
[Diagram: the MT pyramid. Direct approaches (SMT, EBMT) map source (Arabic) straight to target (English); transfer rules connect the outputs of syntactic parsing; at the top, semantic analysis yields an interlingua, from which sentence planning and text generation produce the target.]

Multi-Engine Machine Translation
MT systems have different strengths
–Rapidly adaptable: statistical, example-based
–Good grammar: rule-based (linguistic) MT
–High precision in narrow domains: KBMT
–Minority-language MT: learnable from informant
Combine results of parallel-invoked MT engines
–Select the best of multiple translations
–Selection based on optimizing a combination of:
a target-language joint-exponential model
confidence scores of the individual MT engines

Illustration of Multi-Engine MT
Source: El punto de descarge / se cumplirá en / el puente Agua Fria
Engine 1: The drop-off point / will comply with / The cold Bridgewater
Engine 2: The discharge point / will self comply in / the “Agua Fria” bridge
Engine 3: Unload of the point / will take place at / the cold water of bridge

EBMT Example
English: I would like to meet her.
Mapudungun: Ayükefun trawüael fey engu.
English: The tallest man is my father.
Mapudungun: Chi doy fütra chi wentru fey ta inche ñi chaw.
English: I would like to meet the tallest man.
Mapudungun (new): Ayükefun trawüael chi doy fütra chi wentru
Mapudungun (correct): Ayüken ñi trawüael chi doy fütra wentru engu.
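The recombination idea behind this example can be sketched as a greedy longest-match over a phrase table extracted from the stored pairs. The phrase pairs below are a hypothetical alignment; a real EBMT system induces them from the corpus, which is why the stitched output is close to, but not identical with, the correct translation.

```python
def ebmt_translate(sentence, phrase_table):
    """Greedy longest-match phrase substitution: repeatedly take the
    longest known source phrase at the current position."""
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):          # longest span first
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])                    # pass unknown word through
            i += 1
    return " ".join(out)

# Fragments from the two stored translation pairs (hypothetical alignment).
phrases = {"I would like to meet": "Ayükefun trawüael",
           "the tallest man": "chi doy fütra chi wentru"}
result = ebmt_translate("I would like to meet the tallest man", phrases)
```

The output reproduces the slide's "new" (uncorrected) Mapudungun sentence.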

Architecture Diagram
[Diagram: a learning module, in which an elicitation process with the user feeds an SVS learning process that produces learned transfer rules; and a run-time module, in which SL input passes through an SL parser, the transfer engine, and a TL generator, in parallel with an EBMT engine, with a unifier module producing the TL output.]

Version Space Learning
–Symbolic learning from + and – examples
–Invented by Mitchell, refined by Hirsh
–Builds the generalization lattice implicitly
–Bounded by the G and S sets
–Worst-case exponential complexity (in the size of G and S)
–Slow convergence rate
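Mitchell's candidate-elimination algorithm over conjunctive attribute hypotheses can be sketched as follows, maintaining the specific boundary S and the general boundary G. This is a minimal illustration; the toy word-order features are assumptions, not the NICE transfer-rule representation.

```python
def more_general(h1, h2):
    """h1 is at least as general as h2 ('?' matches anything)."""
    return all(a == "?" or a == b for a, b in zip(h1, h2))

def candidate_elimination(examples):
    """Candidate elimination over attribute tuples: generalize S on
    positives, minimally specialize G on negatives."""
    n = len(examples[0][0])
    S = None
    G = [tuple("?" for _ in range(n))]
    for x, positive in examples:
        if positive:
            if S is None:
                S = tuple(x)
            else:  # minimal generalization of S to cover x
                S = tuple(s if s == v else "?" for s, v in zip(S, x))
            G = [g for g in G if more_general(g, S)]
        else:
            new_G = []
            for g in G:
                if not more_general(g, x):
                    new_G.append(g)        # already excludes the negative
                    continue
                # Minimally specialize g so it excludes x but still covers S.
                for i in range(n):
                    if g[i] == "?" and S and S[i] != "?" and S[i] != x[i]:
                        new_G.append(g[:i] + (S[i],) + g[i + 1:])
            G = new_G
    return S, G

# Toy (subject-order, adjective-order) feature vectors with labels.
examples = [(("SVO", "ADJ-N"), True),
            (("SVO", "N-ADJ"), False),
            (("SVO", "ADJ-N"), True)]
S, G = candidate_elimination(examples)
```

After these three examples, S pins down both features while G keeps only the adjective-order constraint, showing the two boundaries of the version space.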

Example of Transfer Rule Lattice

Seeded Version Spaces
Generate a concept seed from the first + example
–Generalization-level hypothesis (POS + feature agreement for T-rules in NICE)
Generalization/specialization level bounds
–Up to k levels of generalization, and up to j levels of specialization
Implicit lattice explored seed-outwards
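The bounded, seed-outwards exploration can be sketched as enumerating every hypothesis within k generalization steps of the seed, where one step drops a constraint to a wildcard. The three-slot constraint tuple below is a hypothetical stand-in for a transfer rule.

```python
from itertools import combinations

def generalizations(seed, k):
    """Enumerate all hypotheses reachable from the seed by applying
    up to k generalization steps (dropping a constraint to '?')."""
    positions = [i for i, v in enumerate(seed) if v != "?"]
    result = []
    for level in range(k + 1):
        for idxs in combinations(positions, level):
            h = list(seed)
            for i in idxs:
                h[i] = "?"
            result.append(tuple(h))
    return result

seed = ("NP", "sg", "masc")      # hypothetical transfer-rule constraints
space = generalizations(seed, 2)
```

With three constraints and k=2, the explored space has 1 + 3 + 3 = 7 hypotheses; the fully general hypothesis (3 steps away) stays outside the bound, which is what keeps the search polynomial.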

Complexity of SVS
–O(g^k) upward search, where g = number of generalization operators
–O(s^j) downward search, where s = number of specialization operators
–Since j and k are constants, the SVS runs in polynomial time of order max(j,k)
–Convergence rate bounded by F(j,k)

Next Steps in SVS
–Implementation of the transfer-rule interpreter (partially complete)
–Implementation of SVS to learn transfer rules (under way)
–Elicitation corpus extension for evaluation (under way)
–Evaluation, first on Mapudungun MT (next)

NICE Partners
Mapudungun (in place) — Chile: Universidad de la Frontera, Institute for Indigenous Studies, Ministry of Education
Iñupiaq (advanced discussion) — US (Alaska): Ilisagvik College, Barrow school district, Alaska Rural Systemic Initiative, Trans-Arctic and Antarctic Institute, Alaska Native Language Center
Siona (discussion) — Colombia: OAS-CICAD, Plante, Department of the Interior

Agreement Between LTI and Institute of Indigenous Studies (IEI), Universidad de la Frontera, Chile
Contributions of IEI:
–Native-language knowledge and linguistic expertise in Mapudungun
–Experience in bicultural, bilingual education
–Data collection: recording, transcribing, translating
–Orthographic normalization of Mapudungun

Agreement between LTI and Institute of Indigenous Studies (IEI), Universidad de la Frontera, Chile
Contributions of LTI:
–Develop MT technology for indigenous languages
–Training for data collection and transcription
–Partial support for the data collection effort, pending funding from the Chilean Ministry of Education
–International coordination, technical and project management

LTI/IEI Agreement Continue collaboration on data collection and machine translation technology. Pursue focused areas of mutual interest, such as bilingual education. Seek additional funding sources in Chile and the US.

The IEI Team
Coordinator (leader of a bilingual and multicultural education project):
–Eliseo Canulef
Distinguished native speaker:
–Rosendo Huisca
Linguists (one native speaker, one near-native):
–Juan Hector Painequeo
–Hugo Carrasco
Also: typists/transcribers, recording assistants, translators, and native-speaker linguistic informants

MINEDUC/IEI Agreement Highlights: Based on the LTI/IEI agreement, the Chilean Ministry of Education agreed to fund the data collection and processing team for the year. This agreement will be renewed each year, as needed.

MINEDUC/IEI Agreement: Objectives
–To evaluate the NICE/Mapudungun proposal for orthography and spelling
–To collect an oral corpus that represents the four Mapudungun dialects spoken in Chile. The main domain is primary health, both traditional and western.

MINEDUC/IEI Agreement: Deliverables
–An oral corpus of 800 recorded hours, proportional to the demography of each currently spoken dialect
–120 hours transcribed and translated from Mapudungun to Spanish
–A refined proposal for writing Mapudungun

NICE/Mapudungun: Database
–Writing conventions (Grafemario)
–Mapudungun/Spanish glossary
–Bilingual newspaper, 4 issues
–Ultimas Familias (memoirs)
–Memorias de Pascual Coña (publishable product with a new Spanish translation)
–35 hours of transcribed speech
–80 hours of recorded speech

NICE/Mapudungun: Other Products Standardization of orthography: Linguists at UFRO have evaluated the competing orthographies for Mapudungun and written a report detailing their recommendations for a standardized orthography for NICE. Training for spoken language collection: In January 2001 native speakers of Mapudungun were trained in the recording and transcription of spoken data.

Underfunded Activities
Data collection
–Colombia (unfunded)
–Chile (partially funded)
Travel
–More contact between CMU and Chile (UFRO) and Colombia
Training
–Train Mapuche linguists in language technologies at CMU
–Extend training to Colombia
Refine the MT system for Mapudungun and Siona
–Current funding covers research on the MT engine and data collection, but not detailed linguistic analysis

Outline
–History of MT (see the May 2000 issue of Wired magazine, available on the web)
–How well does it work?
–Procedure for designing an LT project
–Choose an application: what do you want to do?
–Identify the properties of your application
–Methods: knowledge-based, statistical/corpus-based, or hybrid
–Methods: interlingua, transfer, direct
–Typical components of an MT system
–Typical resources required for an MT system

How well does it work? Example: SpanAm Possibly the best Spanish-English MT system. Around 20 years of development.

How well does it work? Example: Systran Try it on the Altavista web page. Many language pairs are available. Some language pairs might have taken up to a person-century of development. Can translate text on any topic. Results may be amusing.

How well does it work? Example: KANT Translates equipment manuals for Caterpillar. Input is controlled English: many ambiguities are eliminated. The input is checked carefully for compliance with the rules. Around 5 output languages. The output might be post-edited. The result has to be perfect to prevent accidents with the equipment.

How well does it work? Example: JANUS Translates spoken conversations about booking hotel rooms or flights. Six languages: English, French, German, Italian, Japanese, Korean (with partners in the C-STAR consortium). Input is spontaneous speech spoken into a microphone. Output is around 60% correct. Task Completion is higher than translation accuracy: users can always get their flights or rooms if they are willing to repeat 40% of their sentences.

How well does it work? Speech Recognition
–Jupiter weather information: you can say things like “What cities do you know about in Chile?” and “What will be the weather tomorrow in Santiago?”
–Communicator flight reservations: CMU-PLAN. You can say things like “I’m travelling to Pittsburgh.”
–Speechworks demo: SAY-DEMO. You can say things like “Sell my shares of Microsoft.”
These are all in English, and are toll-free only in the US, but they are speaker-independent and should work with reasonable foreign accents.

Different kinds of MT Different applications: for example, translation of spoken language or text. Different methods: for example, translation rules that are hand crafted by a linguist or rules that are learned automatically by a machine. The work of building an MT program will be very different depending on the application and the methods.

Procedure for planning an MT project
–Choose an application.
–Identify the properties of your application.
–List your resources.
–Choose one or more methods.
–Make adjustments if your resources are not adequate for the properties of your application.

Choose an application: What do you want to do?
–Exchange e-mail or chat in Quechua and Spanish.
–Translate Spanish web pages about science into Quechua so that kids can read about science in their language.
–Scan the web: “Is there any information about such-and-such new fertilizer and water pollution?” Then, if you find something that looks interesting, take it to a human translator.
–Answer government surveys about health and agriculture (spoken or written).
–Ask directions (“Where is the library?”) (spoken).
–Read government publications in Quechua.

Identify the properties of your application.
–Do you need reliable, high-quality translation?
–How many languages are involved? Two or more?
–Type of input.
–One topic (for example, weather reports) or any topic (for example, calling your friend on the phone to chat)?
–Controlled or free input?
–How much time and money do you have?
–Do you anticipate having to add new topics or new languages?

Do you need high quality? Assimilation: translate something into your language so that you can:
–understand it (may not require high quality)
–evaluate whether it is important or interesting and then send it off for a better translation (does not require high quality)
–use it for educational purposes (probably requires high quality)

Do you need high quality? Dissemination: Translate something into someone else’s language e.g., for publication. Usually should be high quality.

Do you need high quality? Two-Way: e.g., a chat room or spoken conversation. May not require high reliability on correctness if you have a native-language paraphrase to verify against.
–Original input: I would like to reserve a double room.
–Paraphrase: Could you make a reservation for a double room?

Type of Input
Formal text: newspaper, government reports, on-line encyclopedia.
–Difficulty: long sentences
Formal speech: spoken news broadcast.
–Difficulty: speech recognition won’t be perfect
Conversational speech:
–Difficulty: speech recognition won’t be perfect
–Difficulty: disfluencies
–Difficulty: non-grammatical speech
Informal text: e-mail, chat
–Difficulty: non-grammatical text

Methods: Knowledge-Based
Knowledge-based MT: a linguist writes rules for translation:
–noun adjective --> adjective noun
Requires a computational linguist who knows the source and target languages.
Usually takes many years to get good coverage.
Usually high quality.
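A single such hand-written rule can be sketched as a pattern match over POS-tagged input. This is a minimal illustration: real knowledge-based systems apply rules to parse trees, not flat tag sequences, and the tiny tagged example is invented.

```python
def apply_rule(tagged):
    """Apply one hand-written reordering rule:
    N ADJ (Spanish order) -> ADJ N (English order)."""
    out = []
    i = 0
    while i < len(tagged):
        if (i + 1 < len(tagged)
                and tagged[i][1] == "N" and tagged[i + 1][1] == "ADJ"):
            out.extend([tagged[i + 1], tagged[i]])   # swap the pair
            i += 2
        else:
            out.append(tagged[i])
            i += 1
    return out

# "casa blanca" (house white): the rule restores English adjective-noun order.
tagged = [("casa", "N"), ("blanca", "ADJ")]
reordered = apply_rule(tagged)
```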

Methods: statistical/corpus-based Statistical and corpus-based methods involve computer programs that automatically learn to translate. The program must be trained by showing it a lot of data. Requires huge amounts of data. The data may need to be annotated by hand. Does not require a human computational linguist who knows the source and target languages. Could be applied to a new language in a few days. At the current state-of-the-art, the quality is not very good.
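A minimal illustration of learning from data: inducing a crude bilingual lexicon purely from co-occurrence counts over aligned sentence pairs. Real statistical MT uses far more sophisticated alignment models; the three-pair corpus here is invented, and with so little data the guesses are unreliable, which mirrors the slide's point about needing huge amounts of data.

```python
from collections import Counter

def induce_lexicon(parallel):
    """For each source word, pick the target word it co-occurs with
    most often across aligned sentence pairs."""
    cooc = {}
    for src, tgt in parallel:
        for s in src.split():
            cooc.setdefault(s, Counter()).update(tgt.split())
    return {s: c.most_common(1)[0][0] for s, c in cooc.items()}

parallel = [("la casa", "the house"),
            ("la puerta", "the door"),
            ("casa blanca", "white house")]
lex = induce_lexicon(parallel)
```

Even this tiny corpus is enough to disambiguate "la" -> "the" and "casa" -> "house", because each appears in two sentence pairs.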

Methods: Interlingua An interlingua is a machine-readable representation of the meaning of a sentence. –I’d like a double room/Quisiera una habitacion doble. –request-action+reservation+hotel(room-type=double) Good for multi-lingual situations. Very easy to add a new language. Probably better for limited domains -- meaning is very hard to define.
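Generation from such a frame can be sketched with per-language templates keyed on the interlingua. The frame label follows the slide's notation; the template strings and helper names are hypothetical, and real interlingua generators use full grammars rather than templates.

```python
def generate(interlingua, lang):
    """Generate surface text from a frame-style interlingua using
    per-language templates (hypothetical notation from the slide)."""
    templates = {
        ("request-action+reservation+hotel", "en"): "I'd like a {rt} room.",
        ("request-action+reservation+hotel", "es"): "Quisiera una habitacion {rt_es}.",
    }
    rt = interlingua["room-type"]
    rt_es = {"double": "doble", "single": "sencilla"}[rt]  # toy target lexicon
    return templates[(interlingua["frame"], lang)].format(rt=rt, rt_es=rt_es)

il = {"frame": "request-action+reservation+hotel", "room-type": "double"}
en = generate(il, "en")
es = generate(il, "es")
```

Adding a new language means adding one template per frame, which is why interlinguas scale well to multilingual settings.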

Multilingual Interlingual Machine Translation

Methods: Transfer A transfer rule tells you how a structure in one language corresponds to a different structure in another language: –an adjective followed by a noun in English corresponds to a noun followed by an adjective in Spanish. Not good when there are more than two languages -- you have to write different transfer rules for each pair. Better than interlingua for unlimited domain.

Methods: Direct Direct translation does not involve analyzing the structure or meaning of a language. For example, look up each word in a bilingual dictionary. Results can be hilarious: “the spirit is willing but the flesh is weak” can become “the wine is good, but the meat is lousy.” Can be developed very quickly. Can be a good back-up when more complicated methods fail to produce output.
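Direct dictionary lookup is short enough to sketch in full: gloss each word independently, passing unknown words through. The small Spanish-English dictionary is illustrative.

```python
def direct_translate(sentence, dictionary):
    """Word-for-word gloss: look each word up in a bilingual
    dictionary, leaving unknown words untranslated."""
    return " ".join(dictionary.get(w, w) for w in sentence.lower().split())

es_en = {"el": "the", "punto": "point", "de": "of", "descarga": "discharge"}
gloss = direct_translate("el punto de descarga", es_en)
```

Because no structure or word-sense information is used, output quality degrades exactly in the ways the slide describes, but the method never fails to produce something, which is what makes it a useful back-up.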

Components of a Knowledge-Based Interlingua MT System
–Morphological analyzer: identifies prefixes, suffixes, and stems
–Parser (sentence-to-syntactic-structure for the source language, hand-written or automatically learned)
–Meaning interpreter (syntax-to-semantics, source language)
–Meaning interpreter (semantics-to-syntax, target language)
–Generator (syntactic-structure-to-sentence) for the target language

Resources for a knowledge-based interlingua MT system Computational linguists who know the source and target languages. As large a corpus as possible so that the linguists can confirm that they are covering the necessary constructions, but the size of the corpus is not crucial to system development. Lexicons for source and target languages, syntax, semantics, and morphology. A list of all the concepts that can be expressed in the system’s domain.

Components of Example-Based MT: a direct statistical method
–A morphological analyzer and part-of-speech tagger would be nice, but are not crucial.
–An alignment algorithm that runs over a parallel corpus and finds corresponding source and target sentences.
–An algorithm that compares an input sentence to sentences that have been previously translated, or whose translation is known.
–An algorithm that pulls out the corresponding translation, possibly slightly modifying a previous translation.
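The retrieval step above (comparing an input sentence to previously translated ones) can be sketched with a fuzzy string match. `difflib` here is a stand-in for a real EBMT matcher, and the two-entry translation memory is invented.

```python
import difflib

def ebmt_lookup(query, memory):
    """Return the stored translation whose source sentence is closest
    to the query, plus the match ratio (retrieval step of EBMT)."""
    best_src = max(memory,
                   key=lambda s: difflib.SequenceMatcher(None, query, s).ratio())
    ratio = difflib.SequenceMatcher(None, query, best_src).ratio()
    return memory[best_src], ratio

memory = {"I would like to reserve a room": "Quisiera reservar una habitacion",
          "Where is the library": "Donde esta la biblioteca"}
translation, score = ebmt_lookup("I would like to reserve a double room", memory)
```

A full system would then modify the retrieved translation to account for the unmatched words ("double"), which is the adaptation step listed last above.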

Resources for Example Based MT Lexicons would improve quality of translation, but are not crucial. A large parallel corpus (hundreds of thousands of words).

“Omnivorous” Multi-Engine MT: eats any available resources

Approaches we had in mind
–Direct bilingual-dictionary lookup: because it is easy, and it is a back-up when other methods fail.
–Generalized example-based MT: because it is easy and fast, and it can also be a back-up.
–Instructable transfer-based MT: a new, untested idea involving machine learning of rules from a human native speaker. Useful when computational linguists don’t know the language, and people who know the language are not computational linguists.
–Conventional, hand-written transfer rules: in case the new method doesn’t work.