
1 David Chen Advisor: Raymond Mooney Research Preparation Exam August 21, 2008 Learning to Sportscast: A Test of Grounded Language Acquisition

2 Semantics of Language The meaning of words, phrases, etc. Crucial in communication Example: “Spanish goalkeeper Iker Casillas blocks the ball” –Merriam-Webster: (transitive verb) to interfere usually legitimately with (as an opponent) in various games or sports –WordNet: (v) parry, deflect

3 Language Grounding Problem: We are circularly defining the meanings of words in terms of other words. The meanings of many words are grounded in our perception of the physical world: red, ball, cup, run, hit, fall, etc. –Symbol Grounding: Harnad (1990) Even many abstract words and meanings are metaphorical abstractions of terms grounded in the physical world: up, down, over, in, etc. –Lakoff and Johnson’s Metaphors We Live By –Examples: “It’s difficult to put my ideas into words.” “Interest in competitions is up.”

4 Grounding Language Casillas blocks the ball

5 Grounding Language Casillas blocks the ball Block(Casillas)

6 Natural Language and Meaning Representation Natural Language (NL): a language that has evolved naturally, such as English, German, French, or Chinese Meaning Representation Language (MRL): a formal language, such as logic or any computer-executable code Example: “Casillas blocks the ball” (NL) ↔ Block(Casillas) (MRL)

7 Semantic Parsing and Tactical Generation Semantic Parsing (NL → MRL): maps a natural-language sentence to a complete, detailed semantic representation Tactical Generation (MRL → NL): generates a natural-language sentence from a meaning representation Example: “Casillas blocks the ball” ↔ Block(Casillas)
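To make the two mappings concrete, here is a toy, hand-written sketch for a single sentence pattern. This is purely illustrative: the systems discussed later learn such mappings from data, and the function names and pattern here are invented for this example.

```python
import re

# A hypothetical toy (hand-written, not learned) showing the two mappings
# on one sentence pattern. Real systems like WASP and KRISP learn these.

def toy_semantic_parse(sentence):
    """Semantic parsing (NL -> MRL): 'X blocks the ball' => 'Block(X)'."""
    m = re.match(r"(\w+) blocks the ball", sentence)
    return f"Block({m.group(1)})" if m else None

def toy_tactical_generate(mr):
    """Tactical generation (MRL -> NL): 'Block(X)' => 'X blocks the ball'."""
    m = re.match(r"Block\((\w+)\)", mr)
    return f"{m.group(1)} blocks the ball" if m else None

print(toy_semantic_parse("Casillas blocks the ball"))  # Block(Casillas)
print(toy_tactical_generate("Block(Casillas)"))        # Casillas blocks the ball
```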

8 Learning Approach [Diagram: manually annotated training corpora of NL/MRL pairs are fed to a semantic parser learner, which outputs a semantic parser mapping NL to MRL]

9 Learning Approach [Diagram: the same manually annotated NL/MRL corpora are fed to a tactical generator learner, which outputs a tactical generator mapping MRL to NL]

10 Example of Annotated Training Corpus Natural Language (NL) → Meaning Representation Language (MRL):
Alice passes the ball to Bob → Pass(Alice, Bob)
Bob turns the ball over to John → Turnover(Bob, John)
John passes to Fred → Pass(John, Fred)
Fred shoots for the goal → Kick(Fred)
Paul blocks the ball → Block(Paul)
Paul kicks off to Nancy → Pass(Paul, Nancy)
…

11 Example of Annotated Training Corpus Natural Language (NL) → Meaning Representation Language (MRL), with the predicate and constant symbols anonymized:
Alice passes the ball to Bob → P1(C1, C2)
Bob turns the ball over to John → P2(C2, C3)
John passes to Fred → P1(C3, C4)
Fred shoots for the goal → P3(C4)
Paul blocks the ball → P4(C5)
Paul kicks off to Nancy → P5(C5, C6)
…

12 Learning Language from Perceptual Context Constructing annotated corpora for language learning is difficult Children acquire language through exposure to linguistic input in the context of a rich, relevant, perceptual environment Ideally, a computer system can learn language in the same manner

13 Goals Learn to ground the semantics of language (e.g., the meaning of “block”) Learn language through correlated linguistic and visual inputs

14 Challenge “西班牙守門員 擋下了球” (Chinese: “The Spanish goalkeeper blocked the ball”)

15 Challenge A linguistic input may correspond to many possible events ? ? ? “ 西班牙守門員 擋下了球 ”

16 Challenge A linguistic input may correspond to many possible events ? ? ? Pass(GermanyPlayer1, GermanyPlayer2) Kick(GermanyPlayer2) Block(SpanishGoalie) “ 西班牙守門員 擋下了球 ”

17 Overview Sportscasting task Related work Tactical generation Strategic generation Human evaluation

18 Learning to Sportscast Robocup Simulation League games No speech recognition –Record commentaries in text form No computer vision –Rule-based system automatically extracts game events in symbolic form Concentrate on linguistic issues

19 Robocup Simulation League [Screenshot of a simulated game; caption: “Purple goalie blocked the ball”]

20 Learning to Sportscast Learn to sportscast by observing sample human sportscasts Build a function that maps between natural language (NL) and meaning representation (MR) –NL: Textual commentaries about the game –MR: Predicate logic formulas that represent events in the game

21 Robocup Sportscaster Trace
Natural Language Commentary:
Purple goalie turns the ball over to Pink8
Pink11 looks around for a teammate
Pink8 passes the ball to Pink11
Purple team is very sloppy today
Pink11 makes a long pass to Pink8
Pink8 passes back to Pink11
Meaning Representation (candidate events):
badPass ( Purple1, Pink8 )
turnover ( Purple1, Pink8 )
pass ( Pink11, Pink8 )
pass ( Pink8, Pink11 )
ballstopped
pass ( Pink8, Pink11 )
kick ( Pink11 )
kick ( Pink8 )
kick ( Pink11 )
kick ( Pink8 )

22 Robocup Sportscaster Trace
Natural Language Commentary:
Purple goalie turns the ball over to Pink8
Pink11 looks around for a teammate
Pink8 passes the ball to Pink11
Purple team is very sloppy today
Pink11 makes a long pass to Pink8
Pink8 passes back to Pink11
Meaning Representation (candidate events, symbols anonymized):
P6 ( C1, C19 )
P5 ( C1, C19 )
P2 ( C22, C19 )
P2 ( C19, C22 )
P0
P2 ( C19, C22 )
P1 ( C22 )
P1 ( C19 )
P1 ( C22 )
P1 ( C19 )

23 Robocup Data Collected human textual commentary for the four Robocup championship games –Avg # events/game = 2,613 –Avg # sentences/game = 509 Each sentence matched to all events within the previous 5 seconds –Avg # MRs/sentence = 2.5 (min 1, max 12) Manually annotated with correct matchings of sentences to MRs (for evaluation purposes only)
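As a minimal sketch of how this ambiguous corpus is assembled, the snippet below pairs each sentence with every event extracted in the preceding 5 seconds. The timestamps and records are hypothetical; this illustrates the matching rule rather than the authors' actual pipeline.

```python
# Hypothetical timestamped records; the 5-second window is from the slide.
WINDOW = 5.0  # seconds

events = [                     # (time, meaning representation)
    (12.1, "pass(Pink8, Pink11)"),
    (13.0, "kick(Pink11)"),
    (14.2, "pass(Pink11, Pink8)"),
]
sentences = [                  # (time, commentary)
    (14.5, "Pink8 passes the ball to Pink11"),
]

training_data = []
for s_time, sentence in sentences:
    # Every event in the 5 seconds before the sentence is a candidate MR.
    candidates = [mr for e_time, mr in events
                  if s_time - WINDOW <= e_time <= s_time]
    if candidates:
        training_data.append((sentence, candidates))

print(training_data)
# [('Pink8 passes the ball to Pink11',
#   ['pass(Pink8, Pink11)', 'kick(Pink11)', 'pass(Pink11, Pink8)'])]
```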

24 Overview Sportscasting task Related work Tactical generation Strategic generation Human evaluation

25 Semantic Parser Learners Learn a function from NL to MR –NL: “Purple3 passes the ball to Purple5” –MR: Pass ( Purple3, Purple5 ) Semantic Parsing (NL → MR) Tactical Generation (MR → NL) We experiment with two semantic parser learners –WASP (Wong & Mooney, 2006; 2007) –KRISP (Kate & Mooney, 2006)

26 WASP: Word Alignment-based Semantic Parsing Uses statistical machine translation techniques –Synchronous context-free grammars (SCFG) [Wu, 1997; Melamed, 2004; Chiang, 2005] –Word alignments [Brown et al., 1993; Och & Ney, 2003] Capable of both semantic parsing and tactical generation

27 KRISP: Kernel-based Robust Interpretation by Semantic Parsing Productions of the MR language are treated as semantic concepts An SVM classifier is trained for each production with a string subsequence kernel These classifiers are used to compositionally build MRs of sentences More resistant to noisy supervision, but incapable of tactical generation
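For a flavor of the kernel, here is a brute-force gap-weighted subsequence kernel over word sequences. KRISP computes this kind of kernel with an efficient dynamic program inside per-production SVMs; the naive enumeration below is exponential and is meant only as a sketch of the similarity measure, with an assumed decay parameter lam.

```python
from itertools import combinations

def subseq_kernel(s, t, n=2, lam=0.5):
    """Gap-weighted subsequence kernel, by brute-force enumeration.
    Illustration only: exponential in sequence length; KRISP uses an
    efficient dynamic-programming formulation instead."""
    def features(words):
        f = {}
        for idx in combinations(range(len(words)), n):
            key = tuple(words[i] for i in idx)
            span = idx[-1] - idx[0] + 1   # gaps penalized via lam ** span
            f[key] = f.get(key, 0.0) + lam ** span
        return f
    fs, ft = features(s.split()), features(t.split())
    return sum(w * ft[k] for k, w in fs.items() if k in ft)

# Similar phrasings share many weighted word subsequences:
print(subseq_kernel("Purple3 passes the ball to Purple5",
                    "Purple3 quickly passes to Purple5"))
```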

28 KRISPER: KRISP with EM-like Retraining Extension of KRISP that learns from ambiguous supervision [Kate & Mooney, 2007] Uses an iterative EM-like method to gradually converge on a correct meaning for each sentence.

29 KRISPER Step 1: Assume every possible meaning for a sentence is correct. [Diagram: the sportscaster trace from slide 21, with every sentence linked to all events in its window]

30 KRISPER Step 2: The resulting NL-MR pairs are weighted and given to the semantic parser learner; a sentence with k candidate meanings contributes each pair with weight 1/k (the 1/2 and 1/3 weights shown on the slide). [Diagram: the trace from slide 21 with weighted links]

31 KRISPER Step 3: Estimate the confidence of each NL-MR pair using the resulting trained semantic parser. [Diagram: the trace from slide 21 with confidence-scored links]

32 KRISPER Step 4: Use maximum weighted matching on a bipartite graph to find the best NL-MR pairs [Munkres, 1957]. [Diagram: the trace from slide 21 with the best one-to-one matching highlighted]

33 KRISPER Step 5: Give the best pairs to the semantic parser learner in the next iteration, and repeat until convergence. [Diagram: the trace from slide 21 with the selected pairs]
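Putting steps 1 to 5 together, the following is a schematic sketch of the retraining loop, not the authors' implementation. train_parser and score_pair are hypothetical stand-ins for KRISP training and its parse-confidence estimates, SciPy's Hungarian-algorithm solver performs the bipartite matching, and identical MR strings are treated as a single event occurrence for simplicity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian/Munkres solver

def krisper(ambiguous_data, train_parser, score_pair, iterations=5):
    """ambiguous_data: list of (sentence, [candidate MRs]).
    train_parser / score_pair are hypothetical placeholders for KRISP."""
    # Steps 1-2: every candidate meaning is assumed correct; a sentence
    # with k candidates contributes each pair with weight 1/k.
    weighted = [(nl, mr, 1.0 / len(mrs))
                for nl, mrs in ambiguous_data for mr in mrs]
    parser = train_parser(weighted)
    for _ in range(iterations):
        # Step 3: score every sentence/candidate pair with the parser.
        mrs = sorted({mr for _, cands in ambiguous_data for mr in cands})
        cost = np.full((len(ambiguous_data), len(mrs)), 1e9)
        for i, (nl, cands) in enumerate(ambiguous_data):
            for mr in cands:
                cost[i, mrs.index(mr)] = -score_pair(parser, nl, mr)
        # Step 4: maximum-weight bipartite matching (min-cost assignment
        # on negated scores) picks at most one MR per sentence.
        rows, cols = linear_sum_assignment(cost)
        best = [(ambiguous_data[i][0], mrs[j], 1.0)
                for i, j in zip(rows, cols) if cost[i, j] < 1e9]
        # Step 5: retrain on the selected pairs and repeat.
        parser = train_parser(best)
    return parser
```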

34 Overview Sportscasting task Related work Tactical generation Strategic generation Human evaluation

35 Tactical Generation Learn how to generate NL from MR Example: Pass(Pink2, Pink3) → “Pink2 kicks the ball to Pink3” Two steps: 1. Disambiguate the training data 2. Learn a language generator

36 WASPER: WASP with EM-like retraining to handle ambiguous training data. The same augmentation that was added to KRISP to create KRISPER.

37 KRISPER-WASP First train KRISPER to disambiguate the data Then train WASP on the resulting unambiguously supervised data.

38 WASPER-GEN Determines the best matching based on generation (MR → NL). Score each potential NL/MR pair using the currently trained WASP⁻¹ generator. Compute the NIST MT score [NIST report, 2002] between the generated sentence and the potential matching sentence.

39 NIST scores Target: Purple2 passes quickly to Purple3 Candidate: Purple2 passes to Purple3
1-grams: Purple2, passes, to, Purple3 (4/4 matched)
2-grams: Purple2 passes, passes to, to Purple3 (2/3 matched)
3-grams: Purple2 passes to, passes to Purple3 (0/2 matched)
4-gram: Purple2 passes to Purple3 (0/1 matched)
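The counts above can be reproduced with a few lines. Note that the full NIST metric additionally weights each matched n-gram by its information value and applies a brevity penalty; this sketch only computes the raw match counts shown on the slide.

```python
from collections import Counter

def ngram_matches(candidate, target, n):
    """Count candidate n-grams that also occur in the target (clipped)."""
    cand, targ = candidate.split(), target.split()
    c = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    t = Counter(tuple(targ[i:i + n]) for i in range(len(targ) - n + 1))
    return sum(min(cnt, t[g]) for g, cnt in c.items()), sum(c.values())

target = "Purple2 passes quickly to Purple3"
candidate = "Purple2 passes to Purple3"
for n in range(1, 5):
    matched, total = ngram_matches(candidate, target, n)
    print(f"{n}-grams: {matched}/{total}")
# Prints 4/4, 2/3, 0/2, 0/1 -- the counts from the slide.
```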

40 System Overview [Diagram: the Robocup simulator emits symbolic events (Turnover ( purple7, pink2 ), Kick ( pink2 ), Pass ( pink2, pink5 ), Kick ( pink5 ), Pass ( pink5, pink8 ), Ballstopped, Kick ( pink8 ), Pass ( purple5, purple7 )) while the sportscaster produces commentary (“Purple7 loses the ball to Pink2”, “Pink2 kicks the ball to Pink5”, “Pink5 makes a long pass to Pink8”, “Pink8 shoots the ball”); together these form the ambiguous training data. Disambiguation yields unambiguous NL-MR pairs, which are fed to a semantic parser learner to produce a semantic parser]

41 KRISPER and WASPER [Diagram: the same pipeline as slide 40, with KRISP/WASP as the semantic parser learner]

42 WASPER-GEN [Diagram: the same pipeline as slide 40, but the unambiguous training data is fed to a tactical generator learner (WASP) to produce a tactical generator]

43 Systems
System | Disambiguation | Learning language generator
WASP (lower baseline) | Random | WASP
KRISPER (Kate & Mooney, 2007) | KRISP | N/A
WASPER | WASP | WASP
KRISPER-WASP | KRISP | WASP
WASPER-GEN | WASP’s language generator | WASP
WASP with gold matching (upper baseline) | N/A | WASP

44 Systems [Same table as slide 43, highlighting the Disambiguation (matching) column]

45 Matching Four Robocup championship games –Avg # events/game = 2,613 –Avg # sentences/game = 509 Leave-one-game-out cross-validation Metrics: –Precision: % of system’s annotations that are correct –Recall: % of gold-standard annotations correctly produced –F-measure: harmonic mean of precision and recall
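The metrics themselves are the standard ones; a small sketch over sets of proposed and gold-standard NL-MR pairs (the example pairs below are hypothetical):

```python
def precision_recall_f1(system, gold):
    """Standard P/R/F over sets of proposed vs. gold-standard items."""
    system, gold = set(system), set(gold)
    correct = len(system & gold)
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical NL-MR matchings proposed by a system vs. the gold standard:
sys_pairs = {("Pink8 passes to Pink11", "pass(Pink8,Pink11)"),
             ("Pink11 kicks", "kick(Pink11)")}
gold_pairs = {("Pink8 passes to Pink11", "pass(Pink8,Pink11)")}
print(precision_recall_f1(sys_pairs, gold_pairs))  # (0.5, 1.0, 0.666...)
```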

46 Matching Results

47 Systems [Same table as slide 43, highlighting the Learning language generator (tactical generation) column]

48 Tactical Generation Four Robocup championship games –Avg # events/game = 2,613 –Avg # sentences/game = 509 Leave-one-game-out cross-validation NIST score [NIST report, 2002] –Evaluates the quality of machine translations based on matching n-grams

49 Tactical Generation Results

50 Overview Sportscasting task Related work Tactical generation Strategic generation Human evaluation

51 Strategic Generation Generation requires not only knowing how to say something (tactical generation) but also what to say (strategic generation). For automated sportscasting, one must be able to effectively choose which events to describe.

52 Example of Strategic Generation
pass ( purple7, purple6 )
ballstopped
kick ( purple6 )
pass ( purple6, purple2 )
ballstopped
kick ( purple2 )
pass ( purple2, purple3 )
kick ( purple3 )
badPass ( purple3, pink9 )
turnover ( purple3, pink9 )

53 Strategic Generation For each event type (e.g. pass, kick), estimate the probability that it is described by the sportscaster Requires a correct NL/MR matching, obtained either by: –Using the estimated matching from tactical generation –Iterative Generation Strategy Learning

54 Iterative Generation Strategy Learning (IGSL) Directly estimates the likelihood of an event being commented on Self-training iterations to improve estimates Uses events not associated with any NL as negative evidence
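A simplified, one-pass sketch of the core estimate behind this idea: count, per event type, how often events of that type are matched to a comment, treating unmatched events as negative evidence. The real IGSL refines these estimates over self-training iterations, and the event strings below are hypothetical.

```python
from collections import Counter

def event_type(mr):                 # "pass(Pink8, Pink11)" -> "pass"
    return mr.split("(")[0]

def estimate_comment_probs(all_events, commented_events):
    """P(event type is commented on) = described / total, per type.
    Unmatched events count as negative evidence in the denominator."""
    total = Counter(event_type(e) for e in all_events)
    described = Counter(event_type(e) for e in commented_events)
    return {t: described[t] / total[t] for t in total}

all_events = ["pass(a,b)", "pass(b,c)", "kick(b)", "kick(c)", "turnover(c,d)"]
commented = ["pass(a,b)", "turnover(c,d)"]
print(estimate_comment_probs(all_events, commented))
# {'pass': 0.5, 'kick': 0.0, 'turnover': 1.0}
# Strategic generation can then describe an event with its type's
# probability, or only types above a threshold.
```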

55 Strategic Generation Performance Evaluate how well the system can predict which events a human comments on Metrics: –Precision: % of system’s annotations that are correct –Recall: % of gold-standard annotations correctly produced –F-measure: harmonic mean of precision and recall

56 Strategic Generation Results

57 Overview Sportscasting task Related work Tactical generation Strategic generation Human evaluation

58 Human Evaluation (Quasi Turing Test) 4 fluent English speakers as judges 8 commentated game clips –2-minute clips randomly selected from each of the 4 games –Each clip commentated once by a human and once by the machine Presented in random, counter-balanced order Judges were not told which commentaries were human- or machine-generated

59 Demo Clip Game clip commentated using WASPER-GEN with IGSL, since this gave the best results for generation. FreeTTS was used to synthesize speech from the textual output.

60 Human Evaluation
Commentator | English Fluency | Semantic Correctness | Sportscasting Ability
Human | – | – | –
Machine | – | – | –
Difference | – | – | –
[Numeric scores appear on the slide but were not preserved in this transcript]
Rating scale:
Score | English Fluency | Semantic Correctness | Sportscasting Ability
5 | Flawless | Always | Excellent
4 | Good | Usually | Good
3 | Non-native | Sometimes | Average
2 | Disfluent | Rarely | Bad
1 | Gibberish | Never | Terrible

61 Future Work Expand MRs beyond simple logic formulas Apply the approach to learning situated language in a computer video-game environment (Gorniak & Roy, 2005) Apply the approach to captioned images or video, using computer vision to extract objects, relations, and events from real perceptual data (Fleischman & Roy, 2007)

62 Conclusion Current language learning work uses expensive, unrealistic training data. We have developed a language learning system that can learn from language paired with an ambiguous perceptual environment. We have evaluated it on the task of learning to sportscast simulated Robocup games. The system learns to sportscast almost as well as humans.