Computational Models of Discourse Analysis. Carolyn Penstein Rosé, Language Technologies Institute / Human-Computer Interaction Institute.

Warm-Up Discussion
What is social causality, and what aspect of discourse analysis does it most naturally connect to? What is the connection between Girju's approach and Engagement? How would you characterize Girju's approach? What was she trying to do, and why? What can this approach be used to learn?

Student comment: I'm not really sure how to tie any of this into Engagement, other than the fact that the paper confirms that affective values aren't very useful in determining the sentiment of sentences. They claimed that he is a stellar PhD student, although they correctly acknowledged not having any proof.

Student idea: What I think might be interesting to do with Engagement is to somehow apply this good-bad distinction to it. I'm not sure it would be useful, but it could be interesting to see whether contracting really is "worse" than expanding (with respect to affective value, i.e., polarity). I guess I don't understand how social causality, the way they present it, is different from ordinary causality in sentences. Does it just require that both targeted objects in each event are people? They make the claim that studying causality is novel in computational linguistics. This is just wrong on the face of it.

Student Objection
I'm also confused about how the author thinks this analysis has predictive power for people's actions. I guess if we assume that events, as reported by others, are not biased and are not over-reported (likely due to social influences), then we could use something like this to predict others' reactions? ... I'm not sure. My something-fishy-is-happening alarm is going off. Or am I missing some important part of the analysis?

What is the implication that the events mentioned generalize over tense and modality? What would it do with something like "Those Pakis [they] should be boiled in hot oil for what they did." from the Indian blog? What is the function of reported events in a blog?

Student Comment
I think Iris is right that the problem here is that it doesn't actually do much to identify facts about gender. Rather, it identifies ascribed behavior in reported events, and not even particularly generic reported events: only those events that would be reported in a very strictly structured way that only includes pronouns (and proper nouns?).

Student Question I'll buy that their way of identifying relations (using pronoun templates) works. It seems like a very fast algorithm with high precision (they suggest 97%, though probably with extraordinarily low recall). Also, what does it mean to "represent" 56% of the data?
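
To make the "pronoun template" idea concrete, here is a minimal sketch of what such a matcher could look like. The specific pattern (subject pronoun, verb, object pronoun, a clause introduced by "because") and the helper names are illustrative assumptions, not the templates actually used in the paper.

```python
import re

# Hypothetical pronoun-template matcher in the spirit of the approach discussed
# above; the actual templates from the paper are not reproduced here.
SUBJ = r"(?P<subj>I|you|he|she|we|they)"
OBJ = r"(?P<obj>me|you|him|her|us|them)"

# Template: "<subj> <verb phrase> <obj> because <clause>"
# C1 = the main clause, C2 = the clause introduced by the causal marker.
PATTERN = re.compile(
    rf"\b{SUBJ}\b\s+(?P<verb>\w+(?:\s+\w+)?)\s+\b{OBJ}\b\s+because\s+(?P<cause>.+)",
    re.IGNORECASE,
)

def extract_clause_pair(sentence):
    """Return (C1, C2), where C1 is a pronoun-pronoun event and C2 its stated cause."""
    m = PATTERN.search(sentence)
    if not m:
        return None
    c1 = f"{m.group('subj')} {m.group('verb')} {m.group('obj')}"
    c2 = m.group("cause").rstrip(".")
    return c1, c2

print(extract_clause_pair("She forgave him because he apologized right away."))
# -> ('She forgave him', 'he apologized right away')
```

A handful of rigid templates like this would explain both the very high precision and the very low recall the comment speculates about: anything that does not fit the pronoun-pronoun frame is simply never extracted.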

Student Question
On page 68, I'm not quite sure how they got from pronouns to clauses, and how they merged parts 2 and 4 in with the rest (if they did at all; this seems slightly different from their approach on page 69, but I might just be wildly confused).

Clever!! Page 69 is the next step: once you have the pair of clauses, you decide whether they are reciprocal or not. Any ideas on how this could be extended to the case where you use arbitrary named entities or arbitrary noun phrases rather than just pronouns?
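
One possible answer to the extension question, sketched with spaCy (an assumption; the paper itself works only over pronoun templates): let any pronoun or PERSON entity fill the participant slots, and split clauses at an explicit causal marker. The marker list and helper functions are illustrative.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

CAUSAL_MARKERS = {"because", "since", "so"}

def participants(span):
    """Person-like mentions in a span: pronouns plus PERSON named entities."""
    people = [t.text for t in span if t.pos_ == "PRON"]
    people += [ent.text for ent in span.ents if ent.label_ == "PERSON"]
    return people

def clause_pair(sentence):
    """Split at the first causal marker and return (C1, C2) if both halves
    mention a person-like participant; otherwise None."""
    doc = nlp(sentence)
    for i, tok in enumerate(doc):
        if tok.lower_ in CAUSAL_MARKERS:
            c1, c2 = doc[:i], doc[i + 1:]
            if participants(c1) and participants(c2):
                return c1.text, c2.text
    return None

print(clause_pair("Mary praised John because he finished the report early."))
# -> ('Mary praised John', 'he finished the report early.')
```

The hard part this sketch glosses over is deciding which noun phrases actually denote people; arbitrary noun chunks like "the report" would otherwise slip into the participant slots.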

Reciprocity
Reciprocal context: if a temporal order is indicated within C1 or C2 using discourse markers like "still", "then", etc., then the pair can't be symmetric.
Type of eventuality (state versus event): the eventive version might sound symmetric, but it is not; "They chased each other" implies a sequencing.
Modality: "would", "should", "could" are markers of events.
Temporal order of eventualities: Part 3 might indicate an ordering.
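
A toy illustration of how the cues above could be turned into features for a reciprocal-versus-not decision. The cue word lists and the decision rule are made up for illustration; the paper's actual feature set and classifier are not reproduced here.

```python
# Toy feature extractor for deciding whether a clause pair (C1, C2) describes a
# reciprocal (symmetric) eventuality, based on the cues listed on the slide.
TEMPORAL_MARKERS = {"still", "then", "later", "afterwards"}
MODALS = {"would", "should", "could"}
STATE_VERBS = {"love", "hate", "know", "trust", "resemble"}  # rough state/event proxy

def reciprocity_features(c1, c2):
    tokens = (c1 + " " + c2).lower().split()
    return {
        "has_temporal_marker": any(t in TEMPORAL_MARKERS for t in tokens),  # ordering cue => not symmetric
        "has_modal": any(t in MODALS for t in tokens),                      # modality cue
        "has_state_verb": any(t in STATE_VERBS for t in tokens),            # states more plausibly symmetric
    }

def looks_reciprocal(c1, c2):
    f = reciprocity_features(c1, c2)
    # Simple heuristic: an explicit ordering cue rules out symmetry;
    # otherwise a stative predicate is weak evidence for it.
    if f["has_temporal_marker"]:
        return False
    return f["has_state_verb"]

print(looks_reciprocal("they trust each other", "they know each other well"))  # True
print(looks_reciprocal("he chased her", "then she chased him"))                # False
```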

Student Question
I'm also slightly confused about their symmetry results at the top of page 71; the counts seem to add up to 47, not 78. Reduction in error rate: 21.67 - 15.67 = 6; 6 / 21.67 = 27.7%. But 6 / 15.67 is around 38%, so maybe they divided by the wrong thing?

I'm not all that enamored of their machine learning for symmetry. 84% accuracy with a 78/22 split probably works out to a kappa that is... okay? I guess? It's awkward that the features they describe were used, especially F1, since they're only using examples from the ambiguous patterns (I think). Similarly, 51% on a 3-way classification task doesn't look that great. I'm not sure what the set of classes Z is supposed to represent in their HMM. Then in the intentionality section they don't even attempt to automate it, presumably because they tried and performance was even worse than with the other two classifiers. I don't at all like that the different dimensions are treated with such enormous differences between them.
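
For reference, the arithmetic being debated here, worked through under the assumption that the split is roughly 78.33/21.67 and the reported accuracy roughly 84.33% (the exact figures in the paper may differ slightly):

```python
# Relative error reduction and a chance-corrected view of ~84% accuracy
# over a ~78/22 class split (figures are approximate).
baseline_acc = 0.7833   # majority-class baseline
model_acc = 0.8433      # reported accuracy

baseline_err = 1 - baseline_acc                  # ~0.2167
model_err = 1 - model_acc                        # ~0.1567
reduction = baseline_err - model_err             # ~0.06
relative_reduction = reduction / baseline_err    # ~0.277, i.e. ~27.7%
wrong_denominator = reduction / model_err        # ~0.383, the ~38% figure above

# If the majority-class rate is taken as the chance level, the same number
# falls out of a kappa-style correction: (acc - chance) / (1 - chance).
kappa_style = (model_acc - baseline_acc) / (1 - baseline_acc)

print(f"relative error reduction: {relative_reduction:.3f}")
print(f"if divided by the model's own error instead: {wrong_denominator:.3f}")
print(f"kappa-style correction vs. majority baseline: {kappa_style:.3f}")
```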

Student Question
I also don't understand how an HMM is useful for their purposes, or how the affective value approach does more than what I implemented for assignment 3.

Eric: explain how this connects with what you did for assignment 3. Intuition: based on what you know from the sentiment lexicon, you can notice tendencies in the ordering of positive and negative sentiment across eventualities, and you can use those regularities to learn sentiment associations for words whose sentiment is not yet known. Note that it was not very accurate.
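
A small sketch of the bootstrapping intuition described above: start from a seed sentiment lexicon, estimate which polarity tends to follow which across cause-effect clause pairs, and project a guessed polarity onto words the lexicon does not cover. The seed lexicon, the clause pairs, and the update rule are all toy assumptions; this illustrates the intuition, not the paper's HMM.

```python
from collections import Counter

# Toy seed lexicon: "+" for positive words, "-" for negative words.
SEED_LEXICON = {"praise": "+", "helped": "+", "insult": "-", "betrayed": "-"}

def clause_polarity(clause, lexicon):
    """Majority polarity of the known words in a clause, or None if none are known."""
    votes = Counter(lexicon[w] for w in clause.split() if w in lexicon)
    return votes.most_common(1)[0][0] if votes else None

def bootstrap(pairs, lexicon, rounds=2):
    lexicon = dict(lexicon)
    transitions = Counter()
    for _ in range(rounds):
        # 1. Estimate how often polarity X in C1 is followed by polarity Y in C2.
        for c1, c2 in pairs:
            p1, p2 = clause_polarity(c1, lexicon), clause_polarity(c2, lexicon)
            if p1 and p2:
                transitions[(p1, p2)] += 1
        # 2. Label unknown words in C2 with the polarity most likely to follow C1's.
        for c1, c2 in pairs:
            p1 = clause_polarity(c1, lexicon)
            if p1 and clause_polarity(c2, lexicon) is None:
                guess = max("+-", key=lambda p: transitions[(p1, p)])
                for w in c2.split():
                    lexicon.setdefault(w, guess)
    return lexicon

pairs = [("they praise him", "he helped them"),
         ("she will insult him", "he betrayed her"),
         ("they praise her", "she rescued them")]
print(bootstrap(pairs, SEED_LEXICON).get("rescued"))  # likely "+"
```

As the note above says, this kind of propagation is noisy: in the toy example every word of the unknown clause inherits the guessed polarity, including function words, which is one reason accuracy was not very high.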

Student Question
The author's final paragraph failed to convince me that social causality has much use, if any at all, because there is no way to normalize the affective aspects of causality. I could see potential use in suicide prevention: trying to understand the blogs of people who have attempted suicide, or are at risk of attempting it, before the respective blog's author is thrown overboard.

Questions?