Prof. Panos Ipeirotis Search and the New Economy Session 5 Mining User-Generated Content.

Today’s Objectives
Tracking preferences using social networks
–Facebook API
–Trend tracking using Facebook
Mining positive and negative opinions
–Sentiment classification for product reviews
–Feature-specific opinion tracking
Economic-aware opinion mining
–Reputation systems in marketplaces
–Quantifying sentiment using econometrics

Top-10, Zeitgeist, Pulse, … Tracking top preferences has been around forever

Online Social Networking Sites Preferences listed and easily accessible

Facebook API
Content is easily extractable, and easy to “slice and dice”:
–List the top-5 books for 30-year-old New Yorkers
–List the book with the highest increase among female users last week
–…
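A minimal sketch of this kind of “slice and dice” analysis, assuming the preference data has already been pulled through the API into a flat table; the file name and columns (age, city, gender, week, favorite_book) are illustrative assumptions, not the actual demo:

```python
# Hypothetical table of collected profile/preference records;
# the file and column names are assumptions for illustration only.
import pandas as pd

profiles = pd.read_csv("facebook_profiles.csv")  # columns: user_id, age, city, gender, week, favorite_book

# Top-5 books among 30-year-old New Yorkers
top5 = (profiles[(profiles.age == 30) & (profiles.city == "New York")]
        .favorite_book.value_counts()
        .head(5))
print(top5)

# Book with the highest week-over-week increase among female users
fem = profiles[profiles.gender == "female"]
weekly = fem.groupby(["week", "favorite_book"]).size().unstack(fill_value=0)
increase = (weekly.iloc[-1] - weekly.iloc[-2]).sort_values(ascending=False)  # assumes at least two weeks of data
print(increase.head(1))
```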

Demo

Today’s Objectives
Tracking preferences using social networks
–Facebook API
–Trend tracking using Facebook
Mining positive and negative opinions
–Sentiment classification for product reviews
–Feature-specific opinion tracking
Economic-aware opinion mining
–Reputation systems in marketplaces
–Quantifying sentiment using econometrics

Customer-generated Reviews Amazon.com started with books Today there are review sites for almost everything In contrast to “favorites” we can get information for less popular products

Questions Are reviews representative? How do people express sentiment?

(Screenshot of a product review, annotated with: the star rating (1 … 5 stars), the review text, and the helpfulness votes from other customers.)

Do People Trust Reviews?
Law of large numbers: a single review, no; multiple ones, yes
Peer feedback: number of “useful” votes
Perceived usefulness is affected by:
–Identity disclosure: users trust real people
–Mixture of objective and subjective elements
–Readability, grammaticality
Negative reviews that are useful may increase sales! (Why?)

Are Reviews Representative?
Guess: what is the shape of the distribution of the number of stars?
(Chart: histogram of review counts by star rating.)

Observation 1: Reporting Bias
(Chart: histogram of review counts by star rating.)
Why? Implications for WOM strategy?

Possible Reasons for Biases People don’t like to be critical People do not post if they do not feel strongly about the product (positively or negatively)

Observation 2: The SpongeBob Effect (SpongeBob SquarePants versus Oscar winners)

Oscar Winners Average Rating 3.7 Stars

SpongeBob DVDs Average Rating 4.1 Stars

And the Winner is… SpongeBob! If the SpongeBob effect is common, then ratings do not accurately signal the quality of the resource

What is Happening Here?
People choose movies they think they will like, and often they are right
–Ratings only tell us that “fans of SpongeBob like SpongeBob”
–Self-selection
Oscar winners draw a wider audience
–Rating is much more representative of the general population
When SpongeBob gets a wider audience, his ratings drop
(Table: number of ratings and average rating for SpongeBob titles, including “SpongeBob Season”, “Tide and Seek”, “SpongeBob the Movie”, “Home Sweet Pineapple”, and “Fear of a Krabby Patty”; the numeric values were lost in extraction.)

Effect of Self-Selection: Example
10 people see SpongeBob’s 4-star ratings
–3 are already SpongeBob fans: they rent the movie and award 5 stars
–6 already know they don’t like SpongeBob and do not see the movie
–The last person doesn’t know SpongeBob, is impressed by the high ratings, rents the movie, and rates it 1 star
Result: the average rating remains unchanged: (5 + 5 + 5 + 1)/4 = 4 stars
9 of the 10 consumers did not really need the rating system
The only consumer who actually used the rating system was misled

Bias-Resistant Reputation System
We want P(S), but we collect data on P(S|R)
–S = consumer is satisfied with the resource
–R = resource was selected (and reviewed)
However, P(S|E) ≈ P(S|E,R), where E = consumer expects to like the resource
–Likelihood of satisfaction depends primarily on the expectation of satisfaction, not on the selection decision: once we condition on E, whether you select the resource or not doesn’t matter
–If we can collect the prior expectation, the gap between the evaluation group and the feedback group disappears

Bias-Resistant Reputation System
Before viewing: “I think I will …”
–Love this movie / Like this movie / It will be just OK / Somewhat dislike this movie / Hate this movie
After viewing: “I liked this movie …”
–Much more than expected / More than expected / About the same as I expected / Less than I expected / Much less than I expected
(The prior-expectation question separates big fans, everyone else, and skeptics.)

Conclusions
1. Reporting bias and self-selection bias exist in most cases of consumer choice
2. Bias means that user ratings do not reflect the distribution of satisfaction in the evaluation group
–Consumers have no idea what “discount” to apply to ratings to get a true idea of quality
3. Many current rating systems may be self-defeating
–Accurate ratings promote self-selection, which leads to inaccurate ratings
4. Collecting prior expectations may help address this problem

OK, we know the biases. Can we get more knowledge?
Can we dig deeper than the numeric ratings?
–“Read the reviews!”
–“There are too many!”

Independent Sentiment Analysis Often we need to analyze opinions –Can we provide review summaries? –What should the summary be?

Basic Sentiment Classification
Classify full documents (e.g., reviews, blog postings) based on the overall sentiment
–Positive, negative, and (possibly) neutral
Similar to, but also different from, topic-based text classification
–In topic-based classification, topic words are important: “diabetes”, “cholesterol” → health; “election”, “votes” → politics
–In sentiment classification, sentiment words are more important, e.g., great, excellent, horrible, bad, worst, etc.
–Sentiment words are usually adjectives or adverbs, or some specific expressions (“it rocks”, “it sucks”, etc.)
Useful when doing aggregate analysis
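A minimal sketch of document-level sentiment classification with a bag-of-words model; the reviews file, its columns, and the star-based labels are illustrative assumptions, not the classifiers used in the studies discussed here:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

reviews = pd.read_csv("reviews.csv")               # hypothetical file with columns: text, stars
labels = (reviews.stars >= 4).astype(int)          # crude positive/negative labels derived from star ratings

X_train, X_test, y_train, y_test = train_test_split(
    reviews.text, labels, test_size=0.2, random_state=0)

# Unigrams + bigrams so expressions like "it rocks" / "it sucks" survive as features
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```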

Can we go further? Sentiment classification is useful, but it does not find what the reviewer liked and disliked. –Negative sentiment does not mean that the reviewer does not like anything about the object. –Positive sentiment does not mean that the reviewer likes everything Go to the sentence level and feature level

Extraction of Features
Two types of features: explicit and implicit
Explicit features are mentioned and evaluated directly
–“The pictures are very clear.”
–Explicit feature: picture
Implicit features are evaluated but not mentioned
–“It is small enough to fit easily in a coat pocket or purse.”
–Implicit feature: size
Extraction: frequency-based approach
–Focus on frequent features (main features)
–Infrequent features can be listed as well
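A rough sketch of the frequency-based idea (in the spirit of the Hu & Liu approach): count nouns across review sentences and keep the frequent ones as candidate features. The sentences and the support threshold are illustrative; a real system would also handle noun phrases and implicit features:

```python
# Requires the NLTK "punkt" tokenizer and POS-tagger models to be downloaded.
from collections import Counter
import nltk

sentences = [
    "The pictures are very clear.",
    "Battery life is excellent.",
    "The pictures look washed out indoors.",
]

noun_counts = Counter()
for sent in sentences:
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sent)):
        if tag.startswith("NN"):                 # nouns (singular and plural)
            noun_counts[word.lower()] += 1

min_support = 2                                  # keep only "frequent" features
features = [w for w, c in noun_counts.items() if c >= min_support]
print(features)                                  # e.g., ['pictures']
```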

Identify Opinion Orientation of Features
Using sentiment words and phrases
–Identify words that are often used to express positive or negative sentiments
–There are many ways (dictionaries, WordNet, collocation with known adjectives, …)
Use the orientation of opinion words as the sentence orientation, e.g.,
–Sum: a negative word near the feature counts -1, a positive word near the feature counts +1
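A toy sketch of the nearness-based scoring described above; the seed word lists are tiny stand-ins for a real opinion lexicon (e.g., one bootstrapped from WordNet):

```python
POSITIVE = {"great", "excellent", "clear", "amazing"}
NEGATIVE = {"poor", "horrible", "bad", "blurry"}

def feature_orientation(sentence, feature, window=4):
    """Sum +1 for each positive and -1 for each negative word near the feature."""
    tokens = sentence.lower().replace(".", "").split()
    if feature not in tokens:
        return 0
    i = tokens.index(feature)
    nearby = tokens[max(0, i - window): i + window + 1]
    return sum(1 for t in nearby if t in POSITIVE) - sum(1 for t in nearby if t in NEGATIVE)

print(feature_orientation("The pictures are very clear.", "pictures"))        # +1
print(feature_orientation("The pictures look blurry and bad.", "pictures"))   # -2
```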

Two Types of Evaluations
Direct opinions: sentiment expressions on some objects/entities, e.g., products, events, topics, individuals, organizations
–E.g., “the picture quality of this camera is great”
–Subjective
Comparisons: relations expressing similarities, differences, or an ordering of more than one object
–E.g., “car x is cheaper than car y”
–Objective or subjective
–Compares feature quality
–Compares feature existence

Visual Summarization & Comparison
(Chart: per-feature counts of positive and negative opinions, for features such as Picture, Battery, Size, Weight, and Zoom, plus a side-by-side comparison of Digital Camera 1 and Digital Camera 2.)

Example: iPod vs. Zune

Today’s Objectives
Tracking preferences using social networks
–Facebook API
–Trend tracking using Facebook
Mining positive and negative opinions
–Sentiment classification for product reviews
–Feature-specific opinion tracking
Economic-aware opinion mining
–Reputation systems in marketplaces
–Quantifying sentiment using econometrics

Comparative Shopping in e-Marketplaces

Customers Rarely Buy Cheapest Item

Are Customers Irrational?
BuyDig.com gets a price premium of $11.04: customers pay more than the minimum available price.

Are Customers Irrational (?)
(Chart: transaction prices, with Amazon’s price shown for reference.)

Why Not Buy the Cheapest?
You buy more than a product
–Customers do not pay only for the product
–Customers also pay for a set of fulfillment characteristics: delivery, packaging, responsiveness, …
Customers care about the reputation of sellers!
Reputation systems are review systems for humans

Example of a reputation profile

Basic idea Conjecture: Price premiums measure reputation Reputation is captured in text feedback Examine how text affects price premiums (and do sentiment analysis as a side effect)

Outline How we capture price premiums How we structure text feedback How we connect price premiums and text

Data Overview
–Panel of 280 software products sold by Amazon.com × 180 days
–Data from the “used goods” market
–Amazon Web Services facilitates capturing the transactions
–No need for any proprietary Amazon data

Data: Secondary Marketplace

Data: Capturing Transactions
We repeatedly “crawl” the marketplace using Amazon Web Services
–While a listing still appears, the item is still available (no sale)
–When a listing disappears, the item has been sold

Data: Transactions and “Price Premiums”
Each listing records the seller, the item, and the price
–Example: a listing still present on Jan 7 means the item is not yet sold; when the listing disappears on Jan 9, we record a sale on Jan 9
–Each recorded sale is then used to compute the “price premiums” defined next

Data: Variables of Interest
Price Premium
–The price charged by the seller minus the listed price of a competitor: Price Premium = (Seller Price – Competitor Price)
–Calculated for each seller-competitor pair, for each transaction
–Each transaction generates M observations (M = number of competing sellers)
Alternative definitions:
–Average Price Premium (one per transaction)
–Relative Price Premium (relative to the seller’s price)
–Average Relative Price Premium (a combination of the above)
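A minimal sketch of these variables; the helper functions and the prices are made up for illustration, not the study’s actual computation:

```python
def price_premiums(seller_price, competitor_prices):
    """One observation per seller-competitor pair."""
    return [seller_price - c for c in competitor_prices]

def average_price_premium(seller_price, competitor_prices):
    prems = price_premiums(seller_price, competitor_prices)
    return sum(prems) / len(prems)

def relative_price_premiums(seller_price, competitor_prices):
    """Premiums expressed relative to the seller's own price."""
    return [(seller_price - c) / seller_price for c in competitor_prices]

sale_price = 399.0
competitors = [431.0, 409.0, 382.0, 379.0, 376.0]
print(price_premiums(sale_price, competitors))                    # [-32.0, -10.0, 17.0, 20.0, 23.0]
print(round(average_price_premium(sale_price, competitors), 2))   # 3.6
```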

(Chart: distribution of price premiums, with Amazon’s price shown for reference.)

(Chart: distribution of average price premiums, with Amazon’s price shown for reference.)

Relative Price Premiums

Average Relative Price Premiums

Outline How we capture price premiums How we structure text feedback How we connect price premiums and text

Decomposing Reputation
Is reputation just a scalar metric?
–Many studies assumed a “monolithic” reputation
–Instead, break reputation down into individual components
–Sellers are characterized by a set of fulfillment characteristics (packaging, delivery, and so on)
What are these characteristics (the ones valued by consumers)?
–We think of each characteristic as a dimension, represented by a noun, noun phrase, verb, or verbal phrase (“shipping”, “packaging”, “delivery”, “arrived”)
–Use (simple) natural language processing tools
–Scan the textual feedback to discover these dimensions

Decomposing and Scoring Reputation
–We think of each characteristic as a dimension, represented by a noun or verb phrase (“shipping”, “packaging”, “delivery”, “arrived”)
–The sellers are rated on these dimensions by buyers using modifiers (adjectives or adverbs), not numerical scores: “Fast shipping!”, “Great packaging”, “Awesome unresponsiveness”, “Unbelievable delays”, “Unbelievable price”
How can we find out the meaning of these adjectives?

Structuring Feedback Text: Example
Parsing the feedback
–P1: “I was impressed by the speedy delivery! Great service!”
–P2: “The item arrived in awful packaging, but the delivery was speedy.”
Deriving the reputation score
–We assume that a modifier assigns a “score” to a dimension
–α(μ, k): score assigned when modifier μ evaluates the k-th dimension
–w(k): weight of the k-th dimension
–Thus, the overall (text) reputation score Π(i) is a sum:
Π(i) = 2 * α(speedy, delivery) * w(delivery) + 1 * α(great, service) * w(service) + 1 * α(awful, packaging) * w(packaging)
–Both the scores α(μ, k) and the weights w(k) are unknown
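In general form (notation added here, not verbatim from the slide): if N_i(μ, k) counts how many times modifier μ is attached to dimension k in seller i’s feedback, then
Π(i) = Σ_k Σ_μ N_i(μ, k) * α(μ, k) * w(k),
where both the scores α(μ, k) and the weights w(k) are unknown and have to be estimated; that estimation is what the regressions below do.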

Outline How we capture price premiums How we structure text feedback How we connect price premiums and text

Sentiment Scoring with Regressions
Scoring the dimensions
–Use price premiums as the “true” reputation score Π(i)
–Use regression to estimate the scores (coefficients)
Regressions
–Control for all variables that affect price premiums
–Control for all numeric reputation scores
–Examine the effect of text: e.g., if a seller with “fast delivery” earns a $10 premium over a seller with “slow delivery”, everything else being equal, then “fast delivery” is $10 better than “slow delivery”
In the regression, the observed price premium stands in for Π(i), and the products α(μ, k) * w(k) are the estimated coefficients on the (modifier, dimension) counts
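A minimal sketch of the estimation idea, assuming the feedback has already been parsed into counts of (modifier, dimension) pairs; the variable names and the tiny dataset are illustrative, and the actual study controls for many more factors:

```python
import pandas as pd
import statsmodels.api as sm

# One row per transaction: counts of modifier-dimension pairs plus a numeric control.
data = pd.DataFrame({
    "price_premium":   [12.0, -3.0, 8.0, 15.0, -6.0, 4.0, 9.0, -1.0],
    "fast_delivery":   [2, 0, 1, 3, 0, 1, 2, 0],
    "slow_delivery":   [0, 1, 0, 0, 2, 0, 0, 1],
    "great_packaging": [1, 0, 1, 2, 0, 0, 1, 0],
    "avg_star_rating": [4.9, 4.1, 4.7, 5.0, 3.8, 4.5, 4.8, 4.2],
})

X = sm.add_constant(data.drop(columns=["price_premium"]))
model = sm.OLS(data["price_premium"], X).fit()
print(model.params)   # each coefficient is the dollar value attached to a modifier-dimension pair
```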

Some Indicative Dollar Values
(Chart: estimated dollar values, positive and negative, for modifier-dimension phrases.)
–A natural method for extracting sentiment strength and polarity
–Naturally captures the pragmatic meaning within the given context: e.g., “good packaging” is estimated at -$0.56; is “good” positive or negative here?
–Captures misspellings as well

Results
Some dimensions that matter:
–Delivery and contract fulfillment (extent and speed)
–Product quality and appropriate description
–Packaging
–Customer service
–Price (!)
–Responsiveness/communication (speed and quality)
–Overall feeling (transaction)

More Results
Further evidence: who will make the sale?
–Classifier that predicts the sale given a set of sellers
–Binary decision between seller and competitor
–Used decision trees (for interpretability)
–Training on data from Oct-Jan, test on data from Feb-Mar
–Only prices and product characteristics: 55%
–+ numerical reputation (stars), lifetime: 74%
–+ encoded textual information: 89%
–Text only: 87%
Text carries more information than the numeric metrics
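An illustrative sketch of such a classifier: a decision tree over seller-competitor differences in price, numeric reputation, and (encoded) textual reputation; the feature names and data are hypothetical:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes a seller-competitor pair: [price_diff, star_diff, lifetime_diff, fast_delivery_diff]
X = [
    [ 10.0,  0.5,  200,  3],
    [-15.0, -0.2, -100, -1],
    [  5.0,  0.1,   50,  2],
    [ 20.0, -0.4,  -30, -2],
]
y = [1, 0, 1, 0]   # 1 = the focal seller made the sale

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "price_diff", "star_diff", "lifetime_diff", "fast_delivery_diff"]))
```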

Other Applications
Summarize and query reputation data
–“Give me all merchants that deliver fast”: SELECT merchant FROM reputation WHERE delivery > ‘fast’
–Summarize the reputation of seller XYZ Inc.: Delivery 3.8/5, Responsiveness 4.8/5, Packaging 4.9/5
Pricing reputation
–Given the competition, merchant XYZ can charge $20 more and still make the sale (confidence: 83%)

Reputation Pricing Tool for Sellers
(Mock-up: seller uCameraSite.com sees its last 5 transactions in Cameras, such as the Canon PowerShot x300, Kodak EasyShare 5.0MP, Nikon Coolpix 5.1MP, Fuji FinePix, and Canon PowerShot x900, together with the competitive landscape for the Canon PowerShot x300: competitor prices from $376 to $431, each with a reputation score between 3.4 and 4.8. The tool reports: Your Price $399, Your Reputation Price $419, Your Reputation Premium $20 (5%), i.e., “$20 left on the table”.)

Tool for Seller Reputation Management
(Mock-up: “Quantitatively understand and manage seller reputation”, showing how customers see the seller relative to other sellers, as percentiles of all merchants on Service, Packaging, Delivery, Quality, and Overall (e.g., 35%, 69%, 89%, 82%, 95%), plus the relative importance of each dimension to the seller’s customers.)
–Automatically identify the dimensions of reputation from textual feedback
–Dimensions are quantified relative to other sellers and relative to buyer importance
–Sellers can understand their key dimensions of reputation and manage them over time
–Arms sellers with vital information to compete on reputation dimensions other than low price

Tool for Buyers
(Mock-up: marketplace search over the used market (e.g., Amazon) for a Canon PS SD700 in the $250-$300 price range; results can be sorted by price, service, delivery, or other dimensions, with a per-seller comparison of Price, Service, Packaging, and Delivery.)

Summary
User feedback defines reputation → price premiums
Generalize: user-generated content affects “markets”
–Reviews and product sales
–News/blogs and elections

Product Reviews and Product Sales
Examine changes in demand and estimate the weights of features and the strength of evaluations:
–“excellent lenses” +3%, “poor lenses” -1%
–“excellent photos” +6%, “poor photos” -2%
What this implies:
–The feature “photos” is two times more important than “lenses”
–“Excellent” is positive, “poor” is negative
–“Excellent” is three times stronger than “poor”

Question: Reviews and Ads
Given product review summaries (potentially with economic impact), can we improve ad generation?
How? Is your strategy incentive-compatible?

Sentiment & Presidential Election

Political News and Prediction Markets Hillary Clinton

Political News and Prediction Markets

Hillary Clinton, Feb 2nd

Political News and Prediction Markets Mitt Romney

Political News and Prediction Markets

Mitt Romney, Feb 2nd

Summary
We can quantify unstructured, qualitative data. We need:
–A context in which content is influential and not redundant (experiential content, for instance)
–A measurable economic variable: price (premium), demand, cost, customer satisfaction, process cycle time
–Methods for structuring unstructured content
–Methods for aggregating the variables in a business context-aware manner

Question: What needs to be done for other types of UGC (user-generated content)?
–Structuring: opinions are expressed in many ways
–Independent summaries: not all scenarios have associated economic outcomes, or the outcomes are difficult to measure (e.g., discussion about a product pre-announcement)
–Personalization: the weight of each person’s opinion varies (an interesting future direction!)
–Data collection: evaluations are rarely all in one place