Slide 1
NLP: Why? How much? How?
Peter Wiemer-Hastings
Slide 2
Why NLP?
Intro: once upon a time, I was a grad student and worked on MUC (the Message Understanding Conference evaluations). Learned:
– the NLP was only as good as we made it
– i.e., there was a direct correlation between the number of person-hours of knowledge engineering (KE) and performance
Result: a pragmatic approach to NLP
Slide 3
One application of NLP: ITS
A class of programs (intelligent tutoring systems) that interact with students, "working" the content. Working means interacting via:
– menus (i.e., multiple choice)
– single-word answers
– solving problems, etc.
Slide 4
AutoTutor
Project:
– Effective learning is constructive (students connect new knowledge to pre-existing knowledge). It's more effective when students say full sentences.
– Prior DBITS (dialogue-based ITSs) couldn't handle the complexity of processing completely free natural language input (with misspellings, ungrammaticality, ambiguity, insanity, profanity, etc.): the computational complexity and the quantity of KE were both too great.
– Modeled on human tutors (for better or worse), who:
  - Don't know what students do and don't know
  - Don't exactly understand student answers
  - Do compare student answers to expected answers
(Student answers to "Do you understand?" are NEGATIVELY correlated with their understanding.)
Slide 5
The challenge, from an NLP point of view
Allow (coerce) students to enter free text; the tutor understands it well enough to keep the conversation going.
Context:
– ALARM (Automated Learning Aid for Research Methods)
– Used by Psychology Research Methods students, now in its fifth term
Slide 8
My approach to NLP: LSA
Latent Semantic Analysis:
– a corpus-derived,
– vector-based
– representation for semantics
– Allows similarity comparison of texts
– Ignores word order
– Meaning of a sentence is the sum of its word vectors (see the sketch below)
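A minimal sketch of the sum-of-word-vectors idea, assuming a hypothetical `word_vectors` dictionary that maps each word to its LSA vector (building that dictionary is sketched under the next slide; none of these names come from the slides themselves):

```python
import numpy as np

def sentence_vector(sentence, word_vectors, dim=300):
    """Meaning of a sentence as the sum of its word vectors;
    word order is ignored, as the slide says."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def lsa_similarity(s1, s2, word_vectors):
    """Similarity of two texts as the cosine of the angle
    between their sentence vectors."""
    v1 = sentence_vector(s1, word_vectors)
    v2 = sentence_vector(s2, word_vectors)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2) / denom if denom else 0.0
```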
Slide 9
Support for LSA
Philosophy:
– Words are similar in meaning when they are used in similar ways (Wittgenstein, 1958)
Basic psychology:
– Humans learn from co-occurrence (Ramscar, 2001)
Cognitive modeling:
– Lexical acquisition (Landauer & Dumais, 1997)
– TOEFL test (Landauer, 1997)
– Choosing texts to learn from (Wolfe et al., 1998)
– Grading essays (Foltz, 1996)
– Metaphor (Kintsch, 2000)
– Tutorial dialogues (Wiemer-Hastings, 1998, 1999)
– Modeling human sentence similarity judgments (Wiemer-Hastings, 2000, 2001)
Slide 10
Intro to LSA
Representations:
– Word
– Sentence
– Similarity between sentences: the cosine of the angle between their vectors, cos(v1, v2) = (v1 · v2) / (||v1|| ||v2||)
How it's trained: singular value decomposition of a term-by-document matrix (columns Doc1, Doc2, …, DocD; see the sketch below)
How it's used:
– Create sentence vectors by adding word vectors
– Human sentence similarity ratings: r = 0.52
– LSA with humans: r = 0.48
– The AutoTutor intelligent tutoring system uses LSA to judge the correctness of student answers by comparing them to target good answers
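A toy training sketch under standard LSA assumptions (real spaces use thousands of documents, term weighting, and roughly 300 dimensions; the corpus and variable names here are invented for illustration):

```python
import numpy as np

docs = ["the cpu executes instructions",
        "the processor runs programs",
        "students write essays"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-by-document count matrix: rows are words, columns Doc1 ... DocD.
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated singular value decomposition; keep k dimensions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = {w: U[i, :k] * S[:k] for i, w in enumerate(vocab)}
```

These rows are the word vectors that `sentence_vector` above sums into sentence vectors.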
Slide 11
Is no syntax enough?
LSA is:
– Good for single words
– Good for longer texts (> 200 words)
– Bad for sentences (Wiemer-Hastings, 1999; Kintsch, personal communication)
Big question: can adding syntax to LSA improve its match to human judgments? (If so, how?)
Our (first) approach (sketched below):
– Resolve anaphora.
– Segment sentences into subject, verb, and object strings.
– Compare segments separately with LSA.
[Figure: example segment comparison with per-segment LSA similarities of 0.38, 0.87, and 0.98, and an overall similarity of 0.76]
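One way the segmentation step could look, using spaCy's dependency labels as a stand-in for the segmenter actually used in this work (the model name and the crude label choices are assumptions, not the authors' implementation):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English dependency parser works

def segment(sentence):
    """Rough subject / verb / object strings from a dependency parse;
    an illustrative stand-in, not a reimplementation of SLSA."""
    doc = nlp(sentence)
    parts = {"subject": [], "verb": [], "object": []}
    for tok in doc:
        if tok.dep_ in ("nsubj", "nsubjpass"):
            parts["subject"].extend(t.text for t in tok.subtree)
        elif tok.pos_ in ("VERB", "AUX"):
            parts["verb"].append(tok.text)
        elif tok.dep_ in ("dobj", "pobj", "attr", "dative"):
            parts["object"].extend(t.text for t in tok.subtree)
    return {k: " ".join(v) for k, v in parts.items()}
```

Each segment pair (subject vs. subject, verb vs. verb, object vs. object) is then compared separately with LSA, e.g. `lsa_similarity(a["verb"], b["verb"], word_vectors)`.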
Slide 12
Comparing with humans
Materials:
– 300 sentence pairs from a computer literacy tutoring corpus; each pair is one student answer and one target good answer
Procedure:
– Collect similarity ratings from 4 raters: 2 experts, 2 novices
Results:
– Humans: r(experts) = 0.50, r(novices) = 0.36
– Humans with (S)LSA:
Discussion:
– Adding syntax gives a better match to novices
– Best match: SLSA
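The r values above are Pearson correlations between sets of similarity ratings; a small sketch of that computation (the toy numbers are invented, and the study used 300 pairs):

```python
from scipy.stats import pearsonr

# Hypothetical ratings over five sentence pairs.
rater_a = [0.9, 0.2, 0.5, 0.7, 0.1]
model   = [0.8, 0.3, 0.6, 0.6, 0.2]

r, p = pearsonr(rater_a, model)
print(f"r = {r:.2f}")  # agreement between a human rater and the model
```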
Slide 13
Back to apps
– RMT (Research Methods Tutor) extensions: emotions, mixed language
– StoryStation
– SAIF: Sourcer's Apprentice Intelligent Feedback
– SWAE: Supporting Writing of Argumentative Essays
Slide 14
Example
Tutor: How would a Latin square design help fix the problem?
Student: I don't know.
Tutor: Here's an example. What do you notice about each condition here?
Student: Each one occurs in each position.
Tutor: Yes! Like this:
1 2 3 4
2 1 4 3
3 4 1 2
4 3 2 1
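The square in the dialogue is a Latin square: each condition occurs once per row and once per column. The cyclic construction below is the simplest way to generate one; it produces a valid square, though not the exact arrangement shown above:

```python
def latin_square(n):
    """Cyclic Latin square: condition ((i + j) mod n) + 1 in row i,
    column j; each condition occurs exactly once in every row
    and every column."""
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

for row in latin_square(4):
    print(*row)
# 1 2 3 4
# 2 3 4 1
# 3 4 1 2
# 4 1 2 3
```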