Modeling User Interactions in Social Media
Eugene Agichtein, Emory University
Outline
– User-generated content
– Community Question Answering
– Contributor authority
– Content quality
– Asker satisfaction
– Open problems
Trends in search and social media
Search in the East:
– Heavily influenced by social media: Naver, Baidu Knows, TaskCn, …
Search in the West:
– Social media mostly indexed/integrated into search repositories
Two opposite trends in social media search:
– Moving towards point relevance (answers, knowledge search)
– Moving towards a browsing experience and a subscription/push model
How can we integrate "active" engagement and contribution with "passive" viewing of content?
Social Media Today
– Published content: 4Gb/day; social media: 10Gb/day; page views: 180-200Gb/day
– Technorati + Blogpulse: ~120M blogs, ~2M posts/day
– Twitter (since 11/07): ~2M users, ~3M messages/day
– Facebook/MySpace: 200-300M users, averaging 19 min/day
– Yahoo! Answers: 90M users, ~20M questions, ~400M answers
[From Andrew Tomkins/Yahoo!, SSM 2008 Keynote]
People Helping People
Naver: popularity reportedly exceeds web search
Yahoo! Answers: some users answer thousands of questions daily
– And get a t-shirt
Open, "quirky": information is shared, not "sold"
Unlike Wikipedia:
– Chatty threads: opinions, support, validation
– No core group of moderators to enforce "quality"
Where is the nearest car rental to Carnegie Mellon University?
Successful Search
Give up on "magic": look up the CMU address/zip code, then use Google Maps.
Query: "car rental near:5000 Forbes Avenue Pittsburgh, PA 15213"
Total time: 7-10 minutes of active "work"
Someone must know this…
+0 minutes: 11pm
+1 minute
+36 minutes
+7 hours: perfect answer
Why would one wait hours? Rational thinking: it can be an effective use of time, particularly for:
– A unique information need
– A subjective/normative question
– A complex question
– Human contact/community
– Multiple viewpoints
http://answers.yahoo.com/question/index;_ylt=3?qid=20071008115118AAh1HdO
Challenges in ____ing Social Media
– Estimating contributor expertise
– Estimating content quality
– Inferring user intent
– Predicting satisfaction: general, personalized
– Matching askers with answerers
– Searching archives
– Detecting spam
Work done in collaboration with: Qi Guo, Yandong Liu, Abulimiti Aji, Jiang Bian, Pawel Jurczyk
Thanks: Prof. Hongyuan Zha; Yahoo! Research: ChaTo Castillo, Gilad Mishne, Aris Gionis, Debora Donato, Ravi Kumar
Related Work
– Adamic et al., WWW 2007, WWW 2008: expertise sharing, network structure
– Kumar et al.: information diffusion in blogspace
– Harper et al., CHI 2008: answer quality
– Leskovec et al.: cascades, preferential attachment models
– Glance & Hurst: blogging
– Kraut et al.: community participation and retention
– SSM 2008 Workshop (Searching Social Media)
– Elsas et al., ICWSM 2008: blog search
Estimating Contributor Authority
[Figure: graph linking questions, answers, and users; an edge runs from the asker of each question to every user who answered it, so askers act as hubs and answerers as authorities]
P. Jurczyk and E. Agichtein, Discovering Authorities in Question Answer Communities Using Link Analysis (poster), CIKM 2007
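The paper applies HITS-style link analysis to this asker-to-answerer graph. Below is a minimal sketch of HITS power iteration over such a graph; the edge representation, iteration count, and L2 normalization are illustrative assumptions, not details taken from the paper.

```python
def hits(edges, iterations=50):
    """Run HITS over a list of (asker, answerer) edges.

    Each edge points from the asker of a question to a user who answered
    it, so hub scores capture askers and authority scores answerers.
    """
    users = {u for pair in edges for u in pair}
    hub = {u: 1.0 for u in users}
    auth = {u: 1.0 for u in users}
    for _ in range(iterations):
        # Authority: sum of hub scores of the askers pointing at you.
        auth = {u: 0.0 for u in users}
        for asker, answerer in edges:
            auth[answerer] += hub[asker]
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {u: v / norm for u, v in auth.items()}
        # Hub: sum of authority scores of the answerers you point at.
        hub = {u: 0.0 for u in users}
        for asker, answerer in edges:
            hub[asker] += auth[answerer]
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {u: v / norm for u, v in hub.items()}
    return hub, auth

# Toy example: user1 asks questions answered by user4 and user5.
hubs, authorities = hits([("user1", "user4"), ("user1", "user5"),
                          ("user2", "user4")])
```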
Finding Authorities: Results
Qualitative Observations
[Examples of categories where HITS is effective vs. where HITS is ineffective]
Trolls
Estimating Content Quality
E. Agichtein, C. Castillo, D. Donato, A. Gionis, G. Mishne, Finding High Quality Content in Social Media, WSDM 2008
Question quality features (selected from all feature subsets):
– UQV: average number of "stars" to questions by the same asker
– Punctuation density in the question's subject
– The question's category (assigned by the asker)
– "Normalized Clickthrough": the number of clicks on the question thread, normalized by the average number of clicks for all questions in its category
– UAV: average number of "thumbs up" received by answers written by the asker of the current question
– Number of words per sentence
– UA: average number of answers with references (URLs) given by the asker of the current question
– UQ: fraction of questions asked by the asker in which he opens the question's answers to voting (instead of picking the best answer by hand)
– UQ: average length of the questions by the asker
– UAV: the number of "best answers" authored by the user
– U: the number of days the user was active in the system
– UAV: "thumbs up" received by the answers written by the asker of the current question, minus "thumbs down", divided by the total number of "thumbs" received
– "Clicks over Views": the number of clicks on a question thread divided by the number of times the question thread was retrieved as a search result
– The KL-divergence between the question's language model and a model estimated from a collection of questions answered by the Yahoo! editorial team (available at http://ask.yahoo.com)
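The last feature compares the question's language model to one built from editorially answered questions. A minimal sketch of that computation, assuming unigram models with add-epsilon smoothing (the exact estimator is not specified here):

```python
import math
from collections import Counter

def kl_divergence(question, reference_texts, eps=1e-6):
    """KL(question model || reference model) over unigram counts,
    with add-epsilon smoothing so every word has nonzero probability."""
    q_counts = Counter(question.lower().split())
    r_counts = Counter(w for t in reference_texts for w in t.lower().split())
    vocab = set(q_counts) | set(r_counts)
    q_total = sum(q_counts.values()) + eps * len(vocab)
    r_total = sum(r_counts.values()) + eps * len(vocab)
    kl = 0.0
    for w in vocab:
        p = (q_counts[w] + eps) / q_total
        q = (r_counts[w] + eps) / r_total
        kl += p * math.log(p / q)
    return kl
```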
Answer quality features:
– Answer length
– The number of words in the answer with a corpus frequency larger than c
– UAV: the number of "thumbs up" minus "thumbs down" received by the answerer, divided by the total number of "thumbs" s/he has received
– The entropy of the trigram character-level model of the answer
– UAV: the fraction of the answerer's answers that have been picked as best answers (either by the askers of those questions or by community voting)
– The number of unique words in the answer
– U: average number of abuse reports received by the answerer over all his/her questions and answers
– UAV: average number of abuse reports received by the answerer over his/her answers
– The non-stopword word overlap between the question and the answer
– The Kincaid readability score of the answer
– QUA: the average number of answers received by the questions asked by the asker of this answer
– The ratio between the length of the question and the length of the answer
– UAV: the number of "thumbs up" minus "thumbs down" received by the answerer
– QUAV: the average number of "thumbs" received by the answers to other questions asked by the asker of this answer
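One of the less obvious features is the character-trigram entropy, which tends to separate fluent text from gibberish or repeated noise. A small sketch, assuming entropy is taken over the empirical trigram distribution of the answer text:

```python
import math
from collections import Counter

def char_trigram_entropy(text):
    """Shannon entropy (in bits) of the empirical character-trigram
    distribution of the text; higher values indicate more varied text."""
    trigrams = [text[i:i + 3] for i in range(len(text) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    total = len(trigrams)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(char_trigram_entropy("My mom has one as she is diabetic"))
```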
Rating Dynamics
Editorial Quality != Popularity != Usefulness
Yahoo! Answers: Time to Fulfillment
[Chart: time to close a question (hours) for sample question categories]
1. 2006 FIFA World Cup
2. Optical
3. Poetry
4. Football (American)
5. Scottish Football (Soccer)
6. Medicine
7. Winter Sports
8. Special Education
9. General Health Care
10. Outdoor Recreation
Predicting Asker Satisfaction
Given a question submitted by an asker in CQA, predict whether the asker will be satisfied with the answers contributed by the community.
– "Satisfied": the asker has closed the question, selected the best answer, and rated the best answer >= 3 "stars"
– Otherwise, "Unsatisfied"
Y. Liu, J. Bian, and E. Agichtein, Predicting Information Seeker Satisfaction in Community Question Answering, SIGIR 2008
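The label definition above translates directly into code. A minimal sketch, using hypothetical record fields (closed_by_asker, best_answer_chosen_by_asker, asker_rating) rather than any actual Yahoo! Answers schema:

```python
def is_satisfied(question):
    """Apply the paper's "satisfied" definition: the asker closed the
    question, hand-picked a best answer, and rated it >= 3 stars.
    Field names are illustrative, not a real schema."""
    return (question["closed_by_asker"]
            and question["best_answer_chosen_by_asker"]
            and question["asker_rating"] >= 3)

q = {"closed_by_asker": True,
     "best_answer_chosen_by_asker": True,
     "asker_rating": 4}
print(is_satisfied(q))  # True
```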
Motivation
– Save time: don't bother to post
– Suggest a good forum for the information need
– Notify the user when a satisfactory answer is contributed
– Move from "relevance" to information-need fulfillment
– Explicit ratings from the asker & community
ASP: Asker Satisfaction Prediction
[Diagram: features drawn from the question text, answers, asker history, answerer history, and category, plus external corpora (Wikipedia, news), feed a classifier that predicts whether the asker is satisfied or not]
Datasets
Crawled from Yahoo! Answers in early 2008 (thanks, Yahoo!): 216,170 questions, 1,963,615 answers, 158,515 askers, 100 categories; 50.7% satisfied.
Available at http://ir.mathcs.emory.edu/shared
Dataset Statistics
Category | #Q | #A | #A per Q | Satisfied | Avg asker rating | Time to close by asker
2006 FIFA World Cup (TM) | 1194 | 35659 | 29.86 | 55.4% | 2.63 | 47 minutes
Mental Health | 151 | 1159 | 7.68 | 70.9% | 4.30 | 1 day and 13 hours
Mathematics | 651 | 2329 | 3.58 | 44.5% | 4.48 | 33 minutes
Diet & Fitness | 450 | 2436 | 5.41 | 68.4% | 4.30 | 1.5 days
Asker satisfaction varies by category; #Q, #A, time to close, etc. are predictive of asker satisfaction.
Satisfaction Prediction: Human Judges
Ground truth: the asker's own rating, on a random sample of 130 questions.
– Researchers: agreement 0.82, F1 0.45
– Amazon Mechanical Turk (five workers per question): agreement 0.9, F1 0.61; best when at least 4 out of 5 raters agree
ASP vs. Humans (F1)
Classifier | With Text | Without Text | Selected Features
ASP_SVM | 0.69 | 0.72 | 0.62
ASP_C4.5 | 0.75 | 0.76 | 0.77
ASP_RandomForest | 0.70 | 0.74 | 0.68
ASP_Boosting | 0.67 | |
ASP_NB | 0.61 | 0.65 | 0.58
Best Human Perf | 0.61 | |
Baseline (naïve) | 0.66 | |
ASP is significantly more effective than humans; human F1 is lower than the naïve baseline!
Features by Information Gain
0.14219 – Q: asker's previous rating
0.13965 – Q: average past rating by asker
0.10237 – UH: member since (interval)
0.04878 – UH: average # answers for past questions
0.04878 – UH: previous questions resolved for the asker
0.04381 – CA: average rating for the category
0.04306 – UH: total number of answers received
0.03274 – CA: average voter rating
0.03159 – Q: question posting time
0.02840 – CA: average # answers per question
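These rankings come from standard information gain, i.e. the reduction in label entropy after splitting on a feature. A small sketch for discrete (pre-binned) feature values; the binning itself is an assumed preprocessing step:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((labels.count(v) / total) * math.log2(labels.count(v) / total)
                for v in set(labels))

def information_gain(feature_values, labels):
    """IG = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        gain -= (len(subset) / len(labels)) * entropy(subset)
    return gain

# Toy example: a feature that perfectly predicts satisfaction.
print(information_gain(["hi", "hi", "lo", "lo"], [1, 1, 0, 0]))  # 1.0
```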
"Offline" vs. "Online" Prediction
Offline prediction:
– All features (question, answer, asker & category)
– F1: 0.77
Online prediction:
– No answer features
– Only asker history and question features (stars, #comments, sum of votes, …)
– F1: 0.74
Feature Ablation
Features | Precision | Recall | F1
Selected features | 0.80 | 0.73 | 0.77
No question-answer features | 0.76 | 0.74 | 0.75
No answerer features | 0.76 | 0.75 |
No category features | 0.75 | 0.76 | 0.75
No asker features | 0.72 | 0.69 | 0.71
No question features | 0.68 | 0.72 | 0.70
Asker & question features are the most important; answer quality, answerer expertise, and category characteristics may matter less, since caring or supportive answers are often preferred.
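The ablation protocol is straightforward to replicate: retrain the classifier with each feature group removed and compare F1. A hypothetical sketch using scikit-learn (the paper's actual toolkit and feature-group indices are assumptions here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def ablation(X, y, feature_groups):
    """X: (n_samples, n_features) array; feature_groups: dict mapping a
    group name to the column indices in that group (names illustrative)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    results = {}
    for name, cols in feature_groups.items():
        # Keep every column except the ablated group, then retrain.
        keep = [c for c in range(X.shape[1]) if c not in set(cols)]
        clf = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
        results["no " + name] = f1_score(y_te, clf.predict(X_te[:, keep]))
    return results
```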
Satisfaction: Varying by Asker Experience
– Group together questions from askers with the same number of previous questions
– Prediction accuracy increases dramatically, reaching an F1 of 0.9 for askers with >= 5 questions
Personalized Prediction of Asker Satisfaction
– The same information != the same usefulness for different users!
– A personalized classifier achieves surprisingly good accuracy (even with just 1 previous question!)
– The simple strategy of grouping users by the number of previous questions is more effective than other methods for users with a moderate amount of history
– For users with >= 20 questions, textual features become more significant
Some Personalized Models
Satisfaction Prediction When Grouping Users by "Age"
Self-Selection: First Experience Is Crucial
[Charts: days as a member vs. rating; # of previous questions vs. rating]
Summary
– Asker satisfaction is predictable: we can achieve higher-than-human accuracy by exploiting interaction history
– The user's experience is important
– General, one-size-fits-all model: 2,000 training questions are enough
– Personalized satisfaction prediction helps with sufficient data (>= 1 previous interaction; text patterns become observable with >= 20 previous interactions)
Problems
– Sparsity: most users post only a single question (a cold-start problem)
– Collaborative filtering: individualized content, but no (visible) rating history (cf. Digg, where ratings are public)
– Subjective information needs
Subjectivity in CQA
How can we exploit the structure of CQA to improve question classification? Case study: question subjectivity prediction.
– Subjective: "Has anyone got one of those home blood pressure monitors? and if so what make is it and do you think they are worth getting?"
– Objective: "What is the difference between chemotherapy and radiation treatments?"
B. Li, Y. Liu, and E. Agichtein, CoCQA: Co-Training Over Questions and Answers with an Application to Predicting Question Subjectivity Orientation, EMNLP 2008
Dataset Statistics (~1000 questions, labeled objective vs. subjective)
Available at http://ir.mathcs.emory.edu/shared/
Key Observations
Analysis of real questions in CQA is challenging:
– They are typically complex and subjective
– They can be ill-phrased and vague
– There is not enough annotated data
Idea: can we utilize the inherent structure of CQA interactions, and use unlabeled CQA data, to improve classification performance?
Natural Approach: Co-Training
Introduced in: Combining Labeled and Unlabeled Data with Co-Training, Blum and Mitchell, 1998
– Two views of the data (e.g., content and hyperlinks in web pages) provide complementary information
– Iteratively construct additional labeled data
Questions and Answers: Two Views
Example:
– Q: "Has anyone got one of those home blood pressure monitors? and if so what make is it and do you think they are worth getting?"
– A: "My mom has one as she is diabetic so its important for her to monitor it she finds it useful."
Answers usually match/fit the question ("My mom… she finds…"), and askers can usually identify matching answers by selecting the "best answer".
CoCQA: A Co-Training Framework over Questions and Answers
[Diagram: two classifiers, C_Q over question text and C_A over answer text, are trained on the labeled data; each labels the unlabeled pool, the most confident predictions are added to the training set, and the loop repeats, with a validation set (held-out training data) deciding when to stop]
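A compact sketch of this loop in the spirit of CoCQA: two views (question text, answer text), each classifier labels the unlabeled pool, and the top-k most confident predictions are promoted to training data each round. Per the backup slides, the paper used LibSVM with raw term-frequency weighting and the SVM margin as confidence; LinearSVC and CountVectorizer stand in for that here, and k, the round count, and the stopping rule are simplifications (the paper stops via the validation set).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def co_train(labeled_q, labeled_a, labels, unlabeled_q, unlabeled_a,
             rounds=10, k=5):
    """labeled_q/labeled_a: parallel lists of question/answer strings with
    0/1 labels; unlabeled_q/unlabeled_a: parallel unlabeled pairs."""
    vq, va = CountVectorizer(), CountVectorizer()  # raw term frequencies
    Xq = vq.fit_transform(labeled_q + unlabeled_q)
    Xa = va.fit_transform(labeled_a + unlabeled_a)
    train_idx = list(range(len(labeled_q)))
    y = list(labels)
    pool = list(range(len(labeled_q), len(labeled_q) + len(unlabeled_q)))
    for _ in range(rounds):
        clf_q = LinearSVC().fit(Xq[train_idx], y)
        clf_a = LinearSVC().fit(Xa[train_idx], y)
        for clf, X in ((clf_q, Xq), (clf_a, Xa)):
            if not pool:
                break
            margins = clf.decision_function(X[pool])
            # Most confident = largest absolute SVM margin.
            top = np.argsort(-np.abs(margins))[:k]
            for i in sorted(top, reverse=True):
                train_idx.append(pool[i])
                y.append(int(margins[i] > 0))
                del pool[i]
    return clf_q, clf_a
```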
Results Summary (F1)
Method | Question | Question + Best Answer
Supervised | 0.717 | 0.695
GE | 0.712 (-0.7%) | 0.717 (+3.2%)
CoCQA | 0.731 (+1.9%) | 0.745 (+7.2%)
CoCQA for Varying Amounts of Labeled Data
Summary
User-generated content is:
– Growing
– Important: it impacts mainstream media, scholarly publishing, …
– A source of insight into information seeking and social processes
– "Training" data for IR, machine learning, NLP, …
– A reason to rethink quality, impact, and usefulness
Current work
– Intelligently route a question to "good" answerers
– Improve web search ranking by incorporating CQA data
– "Cost" models for CQA-based question processing vs. other methods
– Dynamics of user feedback
– Discourse analysis
Takeaways
– People specify their information need fully when they know humans are on the other end
– The next generation of search must be able to cope with complex, subjective, and personal information needs
– To move beyond relevance, we must be able to model user satisfaction
– CQA generates rich data that allows us (and other researchers) to study user satisfaction, interactions, and intent for real users
– Estimating contributor expertise [CIKM 2007]
– Estimating content quality [WSDM 2008]
– Inferring asker intent [EMNLP 2008]
– Predicting satisfaction [SIGIR 2008, ACL 2008]
– Matching askers with answerers
– Searching CQA archives [WWW 2008]
– Coping with spam [AIRWeb 2008]
Thank you! http://www.mathcs.emory.edu/~eugene
Backup Slides
Question-Answer Features
– Q: length, posting time, …
– QA: length, KL divergence
– Q: votes
– Q: terms
User Features
– U: member since
– U: total points
– U: # questions
– U: # answers
Category Features
– CA: average time to close a question
– CA: average # answers per question
– CA: average asker rating
– CA: average voter rating
– CA: average # questions per hour
– CA: average # answers per hour
Example row: General Health | #Q 134 | #A 737 | #A per Q 5.46 | Satisfied 70.4% | avg asker rating 4.49 | time to close by asker: 1 day and 13 hours
Prediction Methods
– Heuristic: # answers
– Baseline: guess the majority class (satisfied)
– ASP (our system), with different classifiers:
– ASP_SVM: SVM classifier
– ASP_C4.5: C4.5 decision tree
– ASP_RandomForest: random forest
– ASP_Boosting: AdaBoost combining weak learners
– ASP_NaiveBayes: Naive Bayes classifier
Satisfaction Prediction: Human Performance (cont'd): Amazon Mechanical Turk
Methodology:
– Used the same 130 questions
– For each question, listed the best answer as well as the other four answers, ordered by votes
– Five independent raters per question
– Agreement: 0.9; F1: 0.61
– Best accuracy was achieved when at least 4 out of 5 raters predicted the asker to be "satisfied" (otherwise, labeled "unsatisfied")
Some Results
Details of the CoCQA Implementation
– Base classifier: LibSVM
– Term frequency as term weight (also tried binary and TF*IDF)
– Select the top K examples with the highest confidence, measured by the SVM margin value
Feature Set
– Character 3-grams: has, any, nyo, yon, one, …
– Words: Has, anyone, got, mom, she, finds, …
– Words with character 3-grams
– Word n-grams (n <= 3, i.e., W_i, W_i W_{i+1}, W_i W_{i+1} W_{i+2}): Has anyone got, anyone got one, she finds it, …
– Word and POS n-grams (n <= 3, i.e., W_i, W_i W_{i+1}, W_i POS_{i+1}, POS_i W_{i+1}, POS_i POS_{i+1}, etc.): NP VBP, She PRP, VBP finds, …
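The first two feature families are easy to reproduce. A minimal sketch of the extractors (whitespace tokenization is an assumption; the POS features would additionally require a part-of-speech tagger, which is omitted here):

```python
def char_ngrams(text, n=3):
    """Character n-grams, e.g. "has any" -> "has", "as ", "s a", " an", "any"."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def word_ngrams(tokens, max_n=3):
    """All word n-grams up to length max_n (unigrams, bigrams, trigrams)."""
    return [" ".join(tokens[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

print(char_ngrams("has anyone"))
print(word_ngrams("has anyone got one".split()))
```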