November 2014 Project Review at ARL Jaime Carbonell (CMU) & Team


1 The Linguistic-Core Approach to Structured Translation and Analysis of Low-Resource Languages
November 2014 Project Review at ARL Jaime Carbonell (CMU) & Team MURI via ARO (PM: Joseph Myers)

2 The Faculty
CMU: Jaime Carbonell, Lori Levin, Noah Smith, Chris Dyer
USC-ISI: Kevin Knight, David Chiang (Notre Dame)
MIT: Regina Barzilay
UT Austin: Jason Baldridge
Supporting roles: 2 other PhDs, 8 grad students, 3 postdocs, N undergraduates

3 LCMT: The Elevator Pitch
The fundamental challenge
"Modern" MT requires massive parallel data
There are languages with scant parallel data
Rule-based MT requires extensive trained-linguist effort
The linguistic-core approach
Goal: 90% of the linguistic benefit with 10% of the linguist effort
Annotation deep and light; linguistics → "lay" bilinguals
Augmented with machine learning from bilingual & monolingual text
Accomplishments to date
Theory: GFL, graph semantics, AMR & other parsers, sparse ML training, linguistically-anchored models, … → 40+ papers
Tool suites: GFL, TurboParser, MT-in-works, Morph, SuperTag, …
Languages: Kinyarwanda, Malagasy, Swahili, Yoruba

4 The Setting: MURI Languages
Kinyarwanda: Bantu (7.5M speakers)
Malagasy: Malayo-Polynesian (14.5M)
Swahili: Bantu (5M native, 150M 2nd/3rd language)
Yoruba: Niger-Congo (22+M)
Morpho-syntactics example (Swahili): Anamwona, "he is seeing him/her"

5 Which MT Paradigms are Best? Towards Filling the Table
Axes: source-language data (Large/Med/Small S) vs. target-language data (Large/Med/Small T)
Large S → Large T: SMT ("Old" DARPA MT: Arabic → English; Chinese → English)
Lower-resource cells: LCMT; some cells remain open (???)

6 Evolutionary Tree of MT Paradigms Leading up to LCMT
[Timeline figure, 1950-2014: Decoding MT → Statistical MT → Phrasal SMT → SMT with syntax; Transfer MT → Transfer MT with statistical phrases; Interlingua MT; Analogy MT → Example-based MT → Context-Based MT; large-scale TMT; branches converging on LCMT]

7 Linguistically omnivorous parsing
Inputs: GFL annotated corpus (CMU, Texas); small CCG lexicon (Texas); unannotated corpus; linguistic universals
Outputs: parsers (ISI, CMU, MIT, Texas)
Dependency example: "He has been writing a letter."
Abstract Meaning Representation example (the Pierre Vinken sentence):
(j / join
  :ARG0 (p / person
    :name (p2 / name :op1 "Pierre" :op2 "Vinken")
    :age (t / temporal-quantity :quant :unit (y / year)))
  :ARG1 (b / board)
  :prep-as (d2 / director
    :mod (e / executive :polarity -))
  :time (d / date-entity :month 11 :day 29))

8 Original Vision
Linguistic Core Team (LL, JB, SV, JC): elicitation corpus; hand-built linguistic core (parser, taggers, morph. analyzers); MT features; MT error analysis; data selection for annotation
MT Systems Team (KK, DC, SV, JC): MT systems; visualizations and logs
Linguistic Analyzers Team (NS, RB, JB): linguistic analyzers; inference algorithms
Data: parallel, monolingual, elicited, related-language, multi-parallel, comparable; triple gold data; triple ungold data

9 Current Vision
Linguistic Core Team (LL, JB, CD, JC): elicitation corpus; hand-built linguistic core (parser, taggers, semantic analyzers); complex morph analyzers; MT/TA error analysis; data selection for annotation
MT Systems Team (KK, DC, CD, JC): MT systems and TA modules; string/tree/graph transducers
Linguistic Analyzers Team (NS, RB, JB): semantic parsing algorithms; definiteness/discourse; dependency parses
Data: parallel, monolingual, elicited, related-language, multi-parallel, comparable; triple gold and GFL-annotated data

10 PFA Node Alignment Algorithm Example
The tree-tree aligner enforces equivalence constraints and optimizes over terminal alignment scores (words/phrases).
Resulting aligned nodes are highlighted in the figure.
Transfer rules are partially lexicalized and read off the tree.

11 LCMT: NLP Workflow and Tools
GFL annotator framework (CMU, current)
Annotated data → supervised POS taggers (CMU); supervised dependency & AMR parsers (CMU + Texas, ISI); tree-graph syntactic/semantic transducers (ISI)
Unannotated data → semisupervised POS taggers (CMU); semisupervised dependency parsers (Texas, MIT); unsupervised dependency parsers (MIT)
= Toolsuite software (more to come)

12 Machine Translation Paradigms
Phrase-based MT (LCMT 20+% of effort): source string → target string (NIST 2009 c2e)
Morph-syntax-based MT (LCMT 30+%): source string → source tree → target tree → target string
Meaning-based MT (LCMT 40+%): source string → source tree → meaning representation → target tree → target string

13 Some Key Results to Date
Theory of transducers (string, tree, graph)
Massive lexical borrowing across diverse languages
Linguistic universals: dependencies, semantic roles, conservation, AMR, discourse, …
Statistical learning over strings, trees, graphs: Bayesian, HMM/CRF, active sampling → model parameters
Parsing into deep semantics (AMR)
MT demonstrations: focus on M, K, S, Y, but also across ~20 languages (WMT honors, synthetic phrases)
A suite of 11 serious software modules and tools (morphology, variable-depth linguistic annotation, dependency parsing, MT, …)
Current scientific challenges:
Is general graph-topology induction possible?
Bridging structural divergences via semi-universals?
Semantic invariance: lexical, structural, non-propositional?

14 List of "Firsts" for the Linguistic Core
First use of models incorporating linguistic knowledge in the form of hand-written morpho-grammatical rules combined with limited-volume corpus statistics.
First use of models of lexical "borrowing" from other (major) languages to improve translation and analysis of low-resource languages (publication in prep).
First efficient and exact probabilistic model for structured prediction with arbitrary syntactic and semantic dependencies derived from the input language.
First exploitation of large monolingual foreign text collections (vs. bilingually translated collections) to improve low-density MT, by treating foreign text as a mapped/encoded version of English.
First application of formal graph transduction theory to natural language analysis; earlier efforts applied string and tree transduction theory only.
First substantial corpora annotated cheaply by novices used to build effective NLP tools.
First statistical parser to map language into an abstract meaning representation of semantics.
First to show that, for resource-impoverished languages, a multilingual parser based on language universals outperforms a target-language parser.
First analyses to formally prove and empirically demonstrate that inference in dependency parsing is computationally easy in the average case (despite being NP-hard in the worst case).

15 External Honors for the LC Project
Best human judgments of English-Russian translations at WMT 2013
Best BLEU on Hindi-English translation at WMT 2014
Best student paper, ACL 2014: "Low-Rank Tensors for Scoring Dependency Structures." Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola.
Best paper, honorable mention, ACL 2014: "A Discriminative Graph-Based Parser for the Abstract Meaning Representation." Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith.
Best paper, runner-up, EMNLP 2014: "Language Modeling with Power Low Rank Ensembles."
Best paper (one of four), NIPS 2014: "Conditional Random Field Autoencoders for Unsupervised Structured Prediction." Waleed Ammar, Chris Dyer, and Noah A. Smith.

16 Lexical Borrowing of Common Words

17 Swahili morphology using a crowdsourced lexicon
Patrick Littell, Lori Levin, Chris Dyer
FST written by Patrick Littell; lexicon extracted from dictionaries and textbooks
Provenance tags for roots:
No tag: the root of the word was collected by hand.
[GUESS1]: the root is inferred from the Kamusi lexicon part-of-speech tag, including noun class.
[GUESS2]: the root is from Kamusi, but no noun class is given.
[GUESS3]: possible English loan word.
[GUESS4]: complete guess.

18 Parsing Progress (F1) on CoNLL Dataset: (CMU), (MIT)

19 CCG Supertagging
Example: "the lazy dogs wander"
[Figure: an HMM supertagger with linguistically-motivated priors; candidate supertags per word include np/n, n/n, n, np, (s\np)/np, and s\np]
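A supertag sequence like the one on this slide can be decoded with a standard first-order HMM Viterbi pass. The sketch below is generic, with a toy two-word lattice and invented scores, not the project's actual model or priors:

```python
import math

def viterbi(tokens, tags, log_init, log_trans, log_emit):
    """Most-probable tag sequence under a first-order HMM (log-space)."""
    # Initialize with the first token's emission scores.
    chart = [{t: log_init.get(t, -math.inf) + log_emit.get((t, tokens[0]), -math.inf)
              for t in tags}]
    back = []
    for word in tokens[1:]:
        scores, ptr = {}, {}
        for t in tags:
            # Best-scoring predecessor tag for t at this position.
            prev = max(tags, key=lambda p: chart[-1][p] + log_trans.get((p, t), -math.inf))
            scores[t] = (chart[-1][prev] + log_trans.get((prev, t), -math.inf)
                         + log_emit.get((t, word), -math.inf))
            ptr[t] = prev
        chart.append(scores)
        back.append(ptr)
    # Follow back-pointers from the best final tag.
    best = max(tags, key=lambda t: chart[-1][t])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy lattice for "the dogs"; tags and scores are illustrative only.
path = viterbi(["the", "dogs"], ["np/n", "n"],
               log_init={"np/n": 0.0},
               log_trans={("np/n", "n"): 0.0},
               log_emit={("np/n", "the"): 0.0, ("n", "dogs"): 0.0})
print(path)
```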

20 Parsing into AMR (ACL 2014 honorable mention for best paper)
"Approximately 11,000 guards patrol the 1,200 kilometre border between Russia and Afghanistan."
Concepts are identified first (approximately 11000, patrol-01, guard, distance-quantity in kilometers 1200, border, between, country "Russia", country "Afghanistan"), then relations are added:
(p / patrol-01
  :ARG0 (g / guard
    :quant (a2 / approximately :op1 11000))
  :ARG1 (b / border
    :quant (d4 / distance-quantity :unit (k2 / kilometer) :quant 1200)
    :location (b2 / between
      :op1 (c / country :name (n / name :op1 "Russia"))
      :op2 (c2 / country :name (n2 / name :op1 "Afghanistan")))))
New results: 61% F1 (CMU and ISI)
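AMR graphs like the one above are straightforward to manipulate programmatically. A minimal sketch, using a hypothetical nested-tuple encoding (variable, concept, list of role/child pairs) rather than the project's actual parser output format, which serializes a fragment of this slide's AMR in PENMAN-style notation:

```python
def amr_to_str(node, indent=0):
    """Serialize a (variable, concept, [(role, child)]) triple in AMR notation."""
    var, concept, edges = node
    pad = " " * indent
    out = f"({var} / {concept}"
    for role, child in edges:
        if isinstance(child, tuple):          # nested concept node
            out += f"\n{pad}  {role} " + amr_to_str(child, indent + 2)
        else:                                 # constant (number or string)
            out += f"\n{pad}  {role} {child}"
    return out + ")"

# A fragment of the slide's AMR, re-encoded as nested tuples.
amr = ("p", "patrol-01",
       [(":ARG0", ("g", "guard", [])),
        (":ARG1", ("b", "border",
                   [(":quant", ("d4", "distance-quantity",
                                [(":unit", ("k2", "kilometer", [])),
                                 (":quant", "1200")]))]))])
print(amr_to_str(amr))
```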

21 Unsupervised Part-of-Speech Tagging
[Bar chart: V-measure (higher is better) on Arabic, Basque, Danish, Greek, Hungarian, Italian, Kinyarwanda, Malagasy, Turkish, and average, comparing the conditional random field autoencoder against a featurized hidden Markov model and a classic hidden Markov model]

22 Automatic Classification of the Communicative Functions of Definiteness
Pipeline: annotated corpus (semantics of definiteness) + syntactic features extracted from a dependency parser → logistic regression classifier
Result: predicted semantic functions of definiteness with 78.2% accuracy
Why definiteness?
One instance of non-propositional semantics
Major determinant of word order
Wildly divergent in morpho-syntactic expression
Causes problems in word alignment and language models
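The pipeline above (syntactic features feeding a logistic regression classifier) can be sketched in a few lines. The feature names and toy training data below are invented for illustration, and this tiny SGD trainer stands in for whatever learner the project actually used:

```python
import math

def train_logreg(data, epochs=200, lr=0.5):
    """SGD trainer for logistic regression over sparse feature dicts."""
    w = {}
    for _ in range(epochs):
        for feats, y in data:
            z = sum(w.get(f, 0.0) * v for f, v in feats.items())
            p = 1.0 / (1.0 + math.exp(-z))
            for f, v in feats.items():        # gradient step: (y - p) * x
                w[f] = w.get(f, 0.0) + lr * (y - p) * v
    return w

def predict(w, feats):
    return 1 if sum(w.get(f, 0.0) * v for f, v in feats.items()) > 0 else 0

# Invented toy task: is a definite NP anaphoric (1) or not (0)?
train = [({"det=the": 1, "prev_mention": 1}, 1),
         ({"det=the": 1, "has_relclause": 1}, 0),
         ({"det=a": 1}, 0),
         ({"det=the": 1, "prev_mention": 1, "subj": 1}, 1)]
w = train_logreg(train)
print(predict(w, {"det=the": 1, "prev_mention": 1}))
```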

23 Integrating Alignment and Decipherment for Better Low-Density MT
Jointly model a small bilingual Malagasy/English text (word alignment; Brown et al. 93) and a large Malagasy monolingual text (decipherment; Dou & Knight 13)
Decipherment helps word alignment
Decipherment helps machine translation (BLEU)
ISI jointly with CMU/Texas/MIT

24 Graph Formalisms for Language Understanding and Generation
Investigating: linguistically adequate representations; efficient algorithms
Using them in: Text → Meaning (NLU); Meaning → Text (NLG); meaning-based MT
String / tree / graph automata algorithms:
N-best answer extraction: paths through a WFSA (Viterbi, 1967; Eppstein, 1998); trees in a weighted forest (Jiménez & Marzal, 2000; Huang & Chiang, 2005)
Unsupervised EM training: forward-backward EM (Baum/Welch, 1971; Eisner, 2003); tree transducer EM training (Graehl & Knight, 2004)
Determinization, minimization: of weighted string acceptors (Mohri, 1997); of weighted tree acceptors (Borchardt & Vogler, 2003; May & Knight, 2005)
Intersection: WFSA intersection; tree acceptor intersection
Application of transducers: string → WFST → WFSA; tree → TT → weighted tree acceptor
Composition of transducers: WFST composition (Pereira & Riley, 1996); many tree transducers not closed under composition (Maletti et al., 2009)
Software tools: Carmel, OpenFST; Tiburon (May & Knight, 2010)
ISI jointly with CMU

25 Tajik Resources and Tools (supervision: David, Azim)
Resources: Tajik corpus from the Leipzig archive; Tajik reference grammar (Perry, 2005); PerLex Persian lexicon (Sagot and Walther, 2010); Tajik and Persian Wikipedias; Persian treebank (Rasooli et al., 2013)
Tools (with contributors): IPA converter (Kartik, Pat, Chris); Persian-Tajik converter (Chris); morphological analyzers (Swabha, Chris); Brown clusters (Kartik); morphology lists for NE (Alexa, David); gazetteers (Pat, Chris); Tajik POS tagger (Chris); named entity recognizer (Kartik, Chris)
Gold standards: NE gold standard by a native speaker (Azim); NE pyrite standard by a linguist (Alexa, Lori)

26 New Results for Graph Automata for Mapping Between Text and Meaning
Formalisms compared: strings (FSA, CFG) vs. graphs (DAG acceptor, HRG)
Properties compared: probabilistic, intersects with finite-state, EM training, transduction complexity, implemented
Transduction complexity: FSA O(n); CFG O(n^3); DAG acceptor O(|Q|^(T+1) n); HRG O((3dn)^(T+1))
where d = graph degree (for AMR, high in practice) and T = treewidth (for AMR, low in practice: 2-3)

27 Next Steps (high-level overview)
Finalize MT systems: K, M, S, Y
Package and make available externally
Possibly integrate with government translator workbench
Compare with government systems when available and appropriate (e.g., Malagasy with Carl Rubino)
Complete scientific investigations (graph transduction, MT with AMR, supertagging → parsing, borrowing++, …)
Document and distribute tool suites (rapid annotation, morphology, CCG supertagging, dependency parsing, AMR parsing, generation, end-to-end MT, lexicon borrowing, ML modules, …) → 15 +/-
Publish, publish, publish (40+ papers and counting)
Detailed next steps at the end of each major presentation

28 THANK YOU! Jaime Carbonell, CMU

29 Supplementary Slides
Select/show as needed for discussion period

30 Tag Dictionary Generalization
Raw corpus: "the dog walks" → DT NN VBZ .
Type annotations: PRE2_th, PRE1_t, SUF1_g, TYPE_the, TYPE_thug, TYPE_dog
Token annotations: PREV_<b>, PREV_the, NEXT_walks, TOK_the_1, TOK_the_4, TOK_thug_5, TOK_dog_2
Any arbitrary features could be added.
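The type- and token-level features on this slide can be generated mechanically. A small sketch (the zero-based token-index convention here is assumed; as the slide notes, any arbitrary features could be added):

```python
def type_features(word):
    """Type-level features from the slide: prefix/suffix shape cues."""
    return [f"PRE1_{word[:1]}", f"PRE2_{word[:2]}", f"SUF1_{word[-1:]}", f"TYPE_{word}"]

def token_features(tokens, i):
    """Token-level features: neighbouring words plus a positioned token id."""
    prev = tokens[i - 1] if i > 0 else "<b>"
    nxt = tokens[i + 1] if i + 1 < len(tokens) else "<b>"
    return [f"PREV_{prev}", f"NEXT_{nxt}", f"TOK_{tokens[i]}_{i}"]

toks = ["the", "dog", "walks"]
print(type_features("dog"))     # includes SUF1_g, as on the slide
print(token_features(toks, 0))
```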

31 “These 7 people include astronauts coming from France and Russia”
RULE 1: DT(these) → 这
RULE 2: VBP(include) → 中包括
RULE 4: NNP(France) → 法国
RULE 5: CC(and) → 和
RULE 6: NNP(Russia) → 俄罗斯
RULE 8: NP(NNS(astronauts)) → 宇航 , 员
RULE 9: PUNC(.) → .
RULE 10: NP(x0:DT, CD(7), NNS(people)) → x0 , 7人
RULE 11: VP(VBG(coming), PP(IN(from), x0:NP)) → 来自 , x0
RULE 13: NP(x0:NNP, x1:CC, x2:NNP) → x0 , x1 , x2
RULE 14: VP(x0:VBP, x1:NP) → x0 , x1
RULE 15: S(x0:NP, x1:VP, x2:PUNC) → x0 , x1 , x2
RULE 16: NP(x0:NP, x1:VP) → x1 , 的 , x0
Output: 这 7人 中包括 来自 法国 和 俄罗斯 的 宇航 员

32 Model Minimization
[Figure: HMM over "<b> The man saw the saw" with tags DT, NN, VBD and transition/emission probabilities such as 1.0, 0.4, 0.6, 0.7, 0.2, 0.8, 0.3]

33 Linguistically opportunistic parsing
Inputs: GFL annotated corpus (CMU, Texas); small CCG lexicon (Texas); unannotated corpus; linguistic universals
Outputs: parsers (ISI, CMU, MIT, Texas)
Dependency example: "He has been writing a letter."
Abstract Meaning Representation example (the Pierre Vinken sentence):
(j / join
  :ARG0 (p / person
    :name (p2 / name :op1 "Pierre" :op2 "Vinken")
    :age (t / temporal-quantity :quant :unit (y / year)))
  :ARG1 (b / board)
  :prep-as (d2 / director
    :mod (e / executive :polarity -))
  :time (d / date-entity :month 11 :day 29))

34 Fragmentary Unlabeled Dependency Grammar
(Schneider, O'Connor, Saphra, Bamman, Faruqui, Smith, Dyer, and Baldridge, 2013)
Represents unlabeled dependencies
Special handling for: multiword expressions, coordination, anaphora
Allows underspecification
Graph Fragment Language for easy annotation

35 Graph Fragment Language (GFL)
English example: "Our three weapons are fear, surprise, and ruthless efficiency."
Provide a detailed analysis of coordination:
{Our three} > weapons > are < $a
$a :: {fear surprise efficiency} :: {and~1 and~2}
ruthless > efficiency
Or focus just on the high level:
(Our three weapons*) > are < (fear surprise and ruthless efficiency)
Malagasy example, meaning "Gaddafi has referred to protesters as rodents in his rambling speeches." The sentence is in the traditional (but typologically unusual) VOS order (actually VOS + PP, where the final PP is "in his rambling speeches").
Provide detailed syntactic dependency structure:
(((Ataon' < (ny > mpanao < fihetsiketsehana)) < hoe < mpikiky < manko) < (i > Gaddafi))
Ataon' < noho < (ny_1 > kabariny < lavareny)
Or focus on predicate/arguments:
Ataon' < (ny_1 mpanao* fihetsiketsehana)
Ataon' < (hoe* mpikiky manko)
Ataon' < (i Gaddafi*)
Ataon' < noho < (ny_2 kabariny lavareny)

36 GFL (CMU/Texas) & AMR (ISI)
The classic: "Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 ."
Penn Treebank parse:
( (S (NP-SBJ (NP (NNP Pierre) (NNP Vinken) ) (, ,) (ADJP (NP (CD 61) (NNS years) ) (JJ old) ) (, ,) ) (VP (MD will) (VP (VB join) (NP (DT the) (NN board) ) (PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director) )) (NP-TMP (NNP Nov.) (CD 29) ))) (. .) ))
GFL:
join < [ Pierre Vinken ]
join < board
join < as < director
join < [Nov. 29]
nonexecutive > director
61 > years > old > [ Pierre Vinken ]
AMR:
(j / join
  :ARG0 (p / person
    :name (p2 / name :op1 "Pierre" :op2 "Vinken")
    :age (t / temporal-quantity :quant :unit (y / year)))
  :ARG1 (b / board)
  :prep-as (d2 / director
    :mod (e / executive :polarity -))
  :time (d / date-entity :month 11 :day 29))

37 Probabilistic Graph Grammars
[Figure: with basic HRG-style rules, semantic graphs can be derived and transformed, e.g. from "the boy wants something involving himself" (WANT with ARG0/ARG1 both pointing to BOY) to "the boy wants the girl to believe he is wanted" (WANT → BELIEVE, with GIRL as the believer)]

38 Example Parsing into AMR
"Approximately 11,000 guards patrol the 1,200 kilometre border between Russia and Afghanistan."
Concepts are identified first, then relations are added:
(p / patrol-01
  :ARG0 (g / guard
    :quant (a2 / approximately :op1 11000))
  :ARG1 (b / border
    :quant (d4 / distance-quantity :unit (k2 / kilometer) :quant 1200)
    :location (b2 / between
      :op1 (c / country :name (n / name :op1 "Russia"))
      :op2 (c2 / country :name (n2 / name :op1 "Afghanistan")))))

39

40 Deciphering Foreign Language
Deciphering engine (Dou & Knight 2012): English text + foreign text (not translations of each other) → bilingual word-for-word dictionary
[Plot: accuracy of the learned bilingual dictionary (tested on Spanish) vs. amount of foreign text (running words)]
Dependency-based decipherment (Dou & Knight 2013) outperforms n-gram-based: linguistic analysis helps substantially!

41 Constituent Structure Trees
Strength: tests for constituency (movement, deletion, substitution, coordination) yield reproducible results for corpus annotation.
Weaknesses:
1. Tests for constituency sometimes fail to provide reproducible results. The five trees (based on an exercise in Radford, 1988) have each been proposed in a published paper, and each can be defended by tests for constituency.
2. People do not have uniform intuitions about which tree is "correct."

42 Morpho-syntactics: Iñupiaq (North Slope Alaska)
Tauqsiġñiaġviŋmuŋniaŋitchugut. 'We won't go to the store.'

43 Mathematical Foundations for Semantics-Based Machine Translation
Previous MT systems have been based on clean string automata and tree automata. General-purpose algorithms have been worked out (in part by MT scientists) with wide applicability; software toolkits even implement those algorithms. But new models of meaning-based MT deal in semantic graph structures: foreign string → meaning graph → English string.
QUESTION: Do efficient, general-purpose algorithms for graph automata exist to support these linguistic models?

44 General-Purpose Algorithms for Manipulating Linguistic Structures: Acceptors
String acceptors: successfully applied to speech recognition
Tree acceptors: successfully applied to syntax-based MT
Graph acceptors: now being applied to semantics-based MT
Membership checking: of a string (length n) in a WFSA, O(n) if the WFSA is determinized; of a tree in a forest, O(n) if determinized; of a graph in a hyperedge-replacement grammar (HRG) (Drewes 97), new algorithm by Chiang (forthcoming), O((2dn)^(k+1)), where d and k are properties of the individual grammar
k-best: best k paths through a WFSA with n states and e edges (Viterbi 67; Eppstein 98), O(e + n log n + k log k); trees in a weighted forest (Jiménez & Marzal 00; Huang & Chiang 05), O(e + n k log k); graphs in a weighted HRG, where the efficient Huang & Chiang results carry over
EM training of probabilistic weights: forward-backward EM (Baum/Welch 71; Eisner 03), O(n); tree acceptor training (Graehl & Knight 04), whose efficient results carry over to graphs
Intersection: WFSA intersection, O(n^2), classical; tree acceptor intersection; graph acceptor intersection NOT CLOSED (in general)
(co-PI supported under MURI project)

45 General-Purpose Algorithms for Feature Structures (Graphs)
Columns: String World / Tree World / Graph World
Acceptor: finite-state acceptors / tree automata / HRG
Transducer: finite-state transducers / tree transducers / synchronous HRG
Membership checking: O(n) / O(n) for trees, O(n^3) for strings / O(n^(k+1)) for graphs
N-best: paths through a WFSA (Viterbi, 1967; Eppstein, 1998) / trees in a weighted forest (Jiménez & Marzal, 2000; Huang & Chiang, 2005) / graphs in a weighted forest
EM training: forward-backward EM (Baum/Welch, 1971; Eisner, 2003) / tree transducer EM training (Graehl & Knight, 2004) / EM on forests of graphs
Intersection: WFSA intersection / tree acceptor intersection / not closed
Transducer composition: WFST composition (Pereira & Riley, 1996) / many tree transducers not closed under composition (Maletti et al., 2009)
General tools: Carmel, OpenFST / Tiburon (May & Knight, 2010) / Bolinas
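The O(n) string-world membership check in the table is the simplest of these operations: follow one transition per input symbol in a deterministic acceptor. A minimal sketch, with a toy acceptor invented for illustration:

```python
def fsa_accepts(transitions, start, finals, s):
    """O(n) membership check in a deterministic finite-state acceptor."""
    state = start
    for ch in s:
        state = transitions.get((state, ch))  # one table lookup per symbol
        if state is None:                      # no transition: reject
            return False
    return state in finals

# Toy deterministic acceptor for the language "a b*".
trans = {(0, "a"): 1, (1, "b"): 1}
print(fsa_accepts(trans, 0, {1}, "abb"))  # True
print(fsa_accepts(trans, 0, {1}, "ba"))   # False
```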

46 Functional Collaboration
Linguistic Core Team (LL, JB, SV, JC): elicitation corpus; hand-built linguistic core (parser, taggers, morph. analyzers); MT features; MT error analysis; data selection for annotation
MT Systems Team (KK, DC, SV, JC): MT systems; visualizations and logs
Linguistic Analyzers Team (NS, RB, JB): linguistic analyzers; inference algorithms
Data: parallel, monolingual, elicited, related-language, multi-parallel, comparable; triple gold data; triple ungold data

47 Malagasy - English Resources

Malagasy resources            Tokens      Types     Hapax
Bible (Year 1)                579,578     19,460    8,401
Leipzig corpus (Year 2)       618,282     41,462    23,659
CMU Global Voices (Year 2)    2,148,976   84,744    46,627
Total                         3,346,836   115,172   62,517

Malagasy - English            eng-Tokens  eng-Types  mlg-Tokens  mlg-Types
Bible (Year 1)                584,872     13,084     579,578     19,460
CMU Global Voices (Year 2)    1,785,472   63,357     2,148,976   84,744
Total                         2,370,344   67,790     3,346,836   115,172
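The token, type, and hapax counts in these tables come from simple corpus statistics: tokens are running words, types are distinct words, and hapax legomena are types occurring exactly once. A sketch (whitespace tokenization is assumed here, which is cruder than the project's actual tokenizers):

```python
from collections import Counter

def corpus_stats(text):
    """Return (tokens, types, hapax legomena) for a whitespace-tokenized text."""
    counts = Counter(text.split())
    tokens = sum(counts.values())
    types = len(counts)
    hapax = sum(1 for c in counts.values() if c == 1)
    return tokens, types, hapax

print(corpus_stats("ny ny mpanao fihetsiketsehana"))  # (4, 3, 2)
```

Note that in the table above the type and hapax totals are not the column sums, since a type shared by two corpora counts once in the union.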

48 Evolutionary Tree of MT Paradigms Prior to LCMT
[Timeline figure, 1950-2012: Decoding MT → Statistical MT → Phrasal SMT → SMT on syntactic structure; Transfer MT → Transfer MT with statistical phrases; Interlingua MT; Analogy MT → Example-based MT → Context-Based MT; large-scale TMT; branches converging on LCMT]

49 Model Parameters
Distribution over number of arguments, given the parent tag
Weights for selection features, shared across all set sizes
Weights for ordering features
All parameters are shared across languages

50 Malagasy Language Modeling

Model          Data    Seq. X-ent  Word X-ent  Total X-ent  Perplexity  OOVs
3-gram+char    Bible   10.35       7.66        18.01        264,323     23.94%
3-gram+char    GV      7.02        1.14        8.16         286.0       3.30%
3-gram+morph   GV                  0.90        7.92         241.4

Successes: the Malagasy analyzer has << 100% coverage, but we still get substantial gains.
Year 3 goals: improve the word-sequence model with morphosyntactic information; improve coverage of Malagasy morphological phenomena; incorporate in the MT system; Kinyarwanda analyzer/generator under development.
Cross-entropy, perplexity, and OOV rates were computed on the MT test set.
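The perplexity figures here relate to cross-entropy by perplexity = 2^H. A minimal sketch of that computation with toy probabilities (the slide's totals additionally combine sequence- and word-level cross-entropies for an open-vocabulary model, which is not modeled here):

```python
import math

def cross_entropy_bits(log2_probs):
    """Average negative log2 probability per token."""
    return -sum(log2_probs) / len(log2_probs)

def perplexity(log2_probs):
    return 2 ** cross_entropy_bits(log2_probs)

# Toy: a model assigning probability 1/8 to each of four tokens.
lp = [math.log2(1 / 8)] * 4
print(cross_entropy_bits(lp), perplexity(lp))  # 3.0 8.0
```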

51 How CMU, ISI, UT, and MIT Collaborate
Monthly teleconference calls: focused on management and project coordination; technical topics follow when appropriate
Semi-annual face-to-face meetings: last ones in Nov 2012 and March 2013; include students/postdocs, etc.; focused on science
Much more frequent focused calls/chats/etc.: data collection, annotations, SW APIs, brainstorming new algorithms, …
Sharing/reviewing results and papers
Website/repository + shared SW/data sets + papers + more goodies
Student exchanges (e.g., week, month, summer)
Occasional individual faculty trips
Combined research (GFL, AMR parsing, CCG parsing, decipherment, …)

